00:00:00.000 Started by upstream project "autotest-per-patch" build number 132768
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.055 The recommended git tool is: git
00:00:00.055 using credential 00000000-0000-0000-0000-000000000002
00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.075 Fetching changes from the remote Git repository
00:00:00.078 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.101 Using shallow fetch with depth 1
00:00:00.101 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.101 > git --version # timeout=10
00:00:00.131 > git --version # 'git version 2.39.2'
00:00:00.131 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.158 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.158 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.401 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.412 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.425 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.425 > git config core.sparsecheckout # timeout=10
00:00:04.435 > git read-tree -mu HEAD # timeout=10
00:00:04.450 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.472 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.472 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.577 [Pipeline] Start of Pipeline
00:00:04.591 [Pipeline] library
00:00:04.594 Loading library shm_lib@master
00:00:08.672 Library shm_lib@master is cached. Copying from home.
00:00:08.745 [Pipeline] node
00:00:08.865 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:08.868 [Pipeline] {
00:00:08.879 [Pipeline] catchError
00:00:08.882 [Pipeline] {
00:00:08.901 [Pipeline] wrap
00:00:08.916 [Pipeline] {
00:00:08.926 [Pipeline] stage
00:00:08.928 [Pipeline] { (Prologue)
00:00:09.165 [Pipeline] sh
00:00:10.052 + logger -p user.info -t JENKINS-CI
00:00:10.086 [Pipeline] echo
00:00:10.087 Node: GP11
00:00:10.096 [Pipeline] sh
00:00:10.447 [Pipeline] setCustomBuildProperty
00:00:10.458 [Pipeline] echo
00:00:10.460 Cleanup processes
00:00:10.465 [Pipeline] sh
00:00:10.759 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.759 27550 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:10.775 [Pipeline] sh
00:00:11.071 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:11.071 ++ grep -v 'sudo pgrep'
00:00:11.071 ++ awk '{print $1}'
00:00:11.071 + sudo kill -9
00:00:11.071 + true
00:00:11.086 [Pipeline] cleanWs
00:00:11.096 [WS-CLEANUP] Deleting project workspace...
00:00:11.096 [WS-CLEANUP] Deferred wipeout is used...
00:00:11.109 [WS-CLEANUP] done
00:00:11.114 [Pipeline] setCustomBuildProperty
00:00:11.128 [Pipeline] sh
00:00:11.418 + sudo git config --global --replace-all safe.directory '*'
00:00:11.555 [Pipeline] httpRequest
00:00:13.455 [Pipeline] echo
00:00:13.457 Sorcerer 10.211.164.20 is alive
00:00:13.467 [Pipeline] retry
00:00:13.469 [Pipeline] {
00:00:13.482 [Pipeline] httpRequest
00:00:13.488 HttpMethod: GET
00:00:13.488 URL: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.490 Sending request to url: http://10.211.164.101/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.495 Response Code: HTTP/1.1 200 OK
00:00:13.495 Success: Status code 200 is in the accepted range: 200,404
00:00:13.495 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.805 [Pipeline] }
00:00:13.822 [Pipeline] // retry
00:00:13.829 [Pipeline] sh
00:00:14.125 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.145 [Pipeline] httpRequest
00:00:14.486 [Pipeline] echo
00:00:14.487 Sorcerer 10.211.164.101 is alive
00:00:14.494 [Pipeline] retry
00:00:14.496 [Pipeline] {
00:00:14.509 [Pipeline] httpRequest
00:00:14.514 HttpMethod: GET
00:00:14.515 URL: http://10.211.164.101/packages/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.516 Sending request to url: http://10.211.164.101/packages/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.519 Response Code: HTTP/1.1 404 Not Found
00:00:14.520 Success: Status code 404 is in the accepted range: 200,404
00:00:14.520 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.523 [Pipeline] }
00:00:14.542 [Pipeline] // retry
00:00:14.550 [Pipeline] sh
00:00:14.866 + rm -f spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:00:14.883 [Pipeline] retry
00:00:14.885 [Pipeline] {
00:00:14.905 [Pipeline] checkout
00:00:14.913 The recommended git tool is: NONE
00:00:16.723 using credential 00000000-0000-0000-0000-000000000002
00:00:16.732 Wiping out workspace first.
00:00:16.744 Cloning the remote Git repository
00:00:16.746 Honoring refspec on initial clone
00:00:16.762 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:16.774 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk # timeout=10
00:00:16.809 Using reference repository: /var/ci_repos/spdk_multi
00:00:16.810 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:16.810 > git --version # timeout=10
00:00:16.813 > git --version # 'git version 2.45.2'
00:00:16.814 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:16.820 Setting http proxy: proxy-dmz.intel.com:911
00:00:16.821 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/04/25504/10 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:38.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:00:38.156 > git config --add remote.origin.fetch refs/changes/04/25504/10 # timeout=10
00:00:38.161 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:00:38.619 Avoid second fetch
00:00:38.650 Checking out Revision c4269c6e2cd0445b86aa16195993e54ed2cad2dd (FETCH_HEAD)
00:00:39.154 Commit message: "lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)"
00:00:39.163 First time build. Skipping changelog.
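[Note: the stale-process cleanup in the Prologue stage above (pgrep piped through grep -v and awk, then a kill -9 guarded by true) is a common CI idiom. A minimal standalone sketch of it, with WORKSPACE as a placeholder path and plain kill where the pipeline uses sudo:]

```shell
#!/usr/bin/env bash
# Sketch of the stale-process cleanup idiom from the Prologue stage.
# WORKSPACE is a placeholder; the CI job uses its Jenkins workspace
# path and runs pgrep/kill under sudo.
WORKSPACE="${WORKSPACE:-/tmp/example-workspace}"

# pgrep -af matches the full command line and prints "PID cmdline";
# grep -v drops the pgrep invocation itself; awk keeps the PID column.
pids=$(pgrep -af "$WORKSPACE" | grep -v 'pgrep' | awk '{print $1}')

# The trailing "|| true" mirrors the log's "+ true": finding nothing
# to kill must not fail the build.
[ -n "$pids" ] && kill -9 $pids || true
echo "cleanup done"
```

[The `grep -v 'pgrep'` step matters because `pgrep -af <path>` matches its own command line, which contains the path.]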
00:00:38.623 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:38.643 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:38.659 > git config core.sparsecheckout # timeout=10
00:00:38.663 > git checkout -f c4269c6e2cd0445b86aa16195993e54ed2cad2dd # timeout=10
00:00:39.158 > git rev-list --no-walk 961c68b08c66ab95493bd99b2eb21fd28b63039e # timeout=10
00:00:39.169 > git remote # timeout=10
00:00:39.173 > git submodule init # timeout=10
00:00:39.233 > git submodule sync # timeout=10
00:00:39.280 > git config --get remote.origin.url # timeout=10
00:00:39.290 > git submodule init # timeout=10
00:00:39.336 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:00:39.340 > git config --get submodule.dpdk.url # timeout=10
00:00:39.344 > git remote # timeout=10
00:00:39.349 > git config --get remote.origin.url # timeout=10
00:00:39.353 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:00:39.367 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:00:39.371 > git remote # timeout=10
00:00:39.376 > git config --get remote.origin.url # timeout=10
00:00:39.381 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:00:39.385 > git config --get submodule.isa-l.url # timeout=10
00:00:39.390 > git remote # timeout=10
00:00:39.394 > git config --get remote.origin.url # timeout=10
00:00:39.399 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:00:39.404 > git config --get submodule.ocf.url # timeout=10
00:00:39.409 > git remote # timeout=10
00:00:39.414 > git config --get remote.origin.url # timeout=10
00:00:39.419 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:00:39.423 > git config --get submodule.libvfio-user.url # timeout=10
00:00:39.426 > git remote # timeout=10
00:00:39.430 > git config --get remote.origin.url # timeout=10
00:00:39.434 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:00:39.438 > git config --get submodule.xnvme.url # timeout=10
00:00:39.441 > git remote # timeout=10
00:00:39.445 > git config --get remote.origin.url # timeout=10
00:00:39.448 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:00:39.452 > git config --get submodule.isa-l-crypto.url # timeout=10
00:00:39.455 > git remote # timeout=10
00:00:39.459 > git config --get remote.origin.url # timeout=10
00:00:39.462 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:00:39.478 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.478 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.479 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:39.499 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:00:39.499 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:00:39.499 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.499 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:00:39.500 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:00:39.500 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.500 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.500 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:00:39.500 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:00:39.500 Setting http proxy: proxy-dmz.intel.com:911
00:00:39.500 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:00:51.157 [Pipeline] dir
00:00:51.158 Running in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:51.160 [Pipeline] {
00:00:51.175 [Pipeline] sh
00:00:51.471 ++ nproc
00:00:51.471 + threads=48
00:00:51.471 + git repack -a -d --threads=48
00:00:58.064 + git submodule foreach git repack -a -d --threads=48
00:00:58.064 Entering 'dpdk'
00:01:08.063 Entering 'intel-ipsec-mb'
00:01:08.063 Entering 'isa-l'
00:01:08.063 Entering 'isa-l-crypto'
00:01:08.063 Entering 'libvfio-user'
00:01:08.063 Entering 'ocf'
00:01:08.063 Entering 'xnvme'
00:01:08.638 + find .git -type f -name alternates -print -delete
00:01:08.638 .git/objects/info/alternates
00:01:08.638 .git/modules/libvfio-user/objects/info/alternates
00:01:08.638 .git/modules/isa-l-crypto/objects/info/alternates
00:01:08.638 .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:08.638 .git/modules/ocf/objects/info/alternates
00:01:08.638 .git/modules/dpdk/objects/info/alternates
00:01:08.638 .git/modules/xnvme/objects/info/alternates
00:01:08.638 .git/modules/isa-l/objects/info/alternates
00:01:08.650 [Pipeline] }
00:01:08.668 [Pipeline] // dir
00:01:08.673 [Pipeline] }
00:01:08.689 [Pipeline] // retry
00:01:08.698 [Pipeline] sh
00:01:08.990 + hash pigz
00:01:08.990 + tar -cf spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz -I pigz spdk
00:01:09.577 [Pipeline] retry
00:01:09.579 [Pipeline] {
00:01:09.594 [Pipeline] httpRequest
00:01:09.601 HttpMethod: PUT
00:01:09.602 URL: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:01:09.610 Sending request to url: http://10.211.164.101/cgi-bin/sorcerer.py?group=packages&filename=spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:01:12.235 Response Code: HTTP/1.1 200 OK
00:01:12.242 Success: Status code 200 is in the accepted range: 200
00:01:12.246 [Pipeline] }
00:01:12.265 [Pipeline] // retry
00:01:12.299 [Pipeline] echo
00:01:12.301 
00:01:12.301 Locking
00:01:12.301 Waited 0s for lock
00:01:12.301 File already exists: /storage/packages/spdk_c4269c6e2cd0445b86aa16195993e54ed2cad2dd.tar.gz
00:01:12.301 
00:01:12.306 [Pipeline] sh
00:01:12.608 + git -C spdk log --oneline -n5
00:01:12.608 c4269c6e2 lib/reduce: Support storing metadata on backing dev. (5 of 5, test cases)
00:01:12.608 75bc78f30 lib/reduce: Support storing metadata on backing dev. (4 of 5, data unmap with async metadata)
00:01:12.608 b67dc21ec lib/reduce: Support storing metadata on backing dev. (3 of 5, reload process)
00:01:12.608 c0f3f2d18 lib/reduce: Support storing metadata on backing dev. (2 of 5, data r/w with async metadata)
00:01:12.608 7ab149b9a lib/reduce: Support storing metadata on backing dev. (1 of 5, struct define and init process)
00:01:12.637 [Pipeline] }
00:01:12.650 [Pipeline] // stage
00:01:12.655 [Pipeline] stage
00:01:12.657 [Pipeline] { (Prepare)
00:01:12.666 [Pipeline] writeFile
00:01:12.676 [Pipeline] sh
00:01:12.996 + logger -p user.info -t JENKINS-CI
00:01:13.071 [Pipeline] sh
00:01:13.356 + logger -p user.info -t JENKINS-CI
00:01:13.367 [Pipeline] sh
00:01:13.642 + cat autorun-spdk.conf
00:01:13.642 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.642 SPDK_TEST_NVMF=1
00:01:13.642 SPDK_TEST_NVME_CLI=1
00:01:13.642 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.642 SPDK_TEST_NVMF_NICS=e810
00:01:13.642 SPDK_TEST_VFIOUSER=1
00:01:13.642 SPDK_RUN_UBSAN=1
00:01:13.642 NET_TYPE=phy
00:01:13.649 RUN_NIGHTLY=0
00:01:13.653 [Pipeline] readFile
00:01:13.680 [Pipeline] withEnv
00:01:13.682 [Pipeline] {
00:01:13.694 [Pipeline] sh
00:01:13.980 + set -ex
00:01:13.980 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:13.980 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.980 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.980 ++ SPDK_TEST_NVMF=1
00:01:13.980 ++ SPDK_TEST_NVME_CLI=1
00:01:13.980 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.980 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.980 ++ SPDK_TEST_VFIOUSER=1
00:01:13.980 ++ SPDK_RUN_UBSAN=1
00:01:13.980 ++ NET_TYPE=phy
00:01:13.980 ++ RUN_NIGHTLY=0
00:01:13.980 + case $SPDK_TEST_NVMF_NICS in
00:01:13.980 + DRIVERS=ice
00:01:13.980 + [[ tcp == \r\d\m\a ]]
00:01:13.980 + [[ -n ice ]]
00:01:13.980 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:13.980 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:17.269 rmmod: ERROR: Module irdma is not currently loaded
00:01:17.269 rmmod: ERROR: Module i40iw is not currently loaded
00:01:17.269 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:17.269 + true
00:01:17.269 + for D in $DRIVERS
00:01:17.269 + sudo modprobe ice
00:01:17.269 + exit 0
00:01:17.278 [Pipeline] }
00:01:17.287 [Pipeline] // withEnv
00:01:17.291 [Pipeline] }
00:01:17.299 [Pipeline] // stage
00:01:17.305 [Pipeline] catchError
00:01:17.306 [Pipeline] {
00:01:17.314 [Pipeline] timeout
00:01:17.315 Timeout set to expire in 1 hr 0 min
00:01:17.316 [Pipeline] {
00:01:17.325 [Pipeline] stage
00:01:17.327 [Pipeline] { (Tests)
00:01:17.339 [Pipeline] sh
00:01:17.627 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:17.627 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:17.627 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:17.627 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:17.627 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:17.627 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:17.627 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:17.627 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:17.627 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:17.627 + source /etc/os-release
00:01:17.627 ++ NAME='Fedora Linux'
00:01:17.627 ++ VERSION='39 (Cloud Edition)'
00:01:17.627 ++ ID=fedora
00:01:17.627 ++ VERSION_ID=39
00:01:17.627 ++ VERSION_CODENAME=
00:01:17.627 ++ PLATFORM_ID=platform:f39
00:01:17.627 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:17.627 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:17.627 ++ LOGO=fedora-logo-icon
00:01:17.627 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:17.627 ++ HOME_URL=https://fedoraproject.org/
00:01:17.627 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:17.627 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:17.627 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:17.627 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:17.627 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:17.627 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:17.627 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:17.627 ++ SUPPORT_END=2024-11-12
00:01:17.627 ++ VARIANT='Cloud Edition'
00:01:17.627 ++ VARIANT_ID=cloud
00:01:17.627 + uname -a
00:01:17.627 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:17.627 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:18.565 Hugepages
00:01:18.565 node hugesize free / total
00:01:18.565 node0 1048576kB 0 / 0
00:01:18.565 node0 2048kB 0 / 0
00:01:18.565 node1 1048576kB 0 / 0
00:01:18.565 node1 2048kB 0 / 0
00:01:18.565 
00:01:18.565 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:18.565 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:18.565 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:18.565 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:18.565 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:18.565 + rm -f /tmp/spdk-ld-path
00:01:18.565 + source autorun-spdk.conf
00:01:18.565 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.565 ++ SPDK_TEST_NVMF=1
00:01:18.565 ++ SPDK_TEST_NVME_CLI=1
00:01:18.565 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.565 ++ SPDK_TEST_NVMF_NICS=e810
00:01:18.565 ++ SPDK_TEST_VFIOUSER=1
00:01:18.565 ++ SPDK_RUN_UBSAN=1
00:01:18.565 ++ NET_TYPE=phy
00:01:18.565 ++ RUN_NIGHTLY=0
00:01:18.565 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:18.825 + [[ -n '' ]]
00:01:18.825 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:18.825 + for M in /var/spdk/build-*-manifest.txt
00:01:18.825 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:18.825 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:18.825 + for M in /var/spdk/build-*-manifest.txt
00:01:18.825 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:18.825 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:18.825 + for M in /var/spdk/build-*-manifest.txt
00:01:18.825 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:18.825 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:18.825 ++ uname
00:01:18.825 + [[ Linux == \L\i\n\u\x ]]
00:01:18.825 + sudo dmesg -T
00:01:18.825 + sudo dmesg --clear
00:01:18.825 + dmesg_pid=30090
00:01:18.825 + [[ Fedora Linux == FreeBSD ]]
00:01:18.825 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:18.825 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:18.825 + sudo dmesg -Tw
00:01:18.825 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:18.825 + [[ -x /usr/src/fio-static/fio ]]
00:01:18.825 + export FIO_BIN=/usr/src/fio-static/fio
00:01:18.825 + FIO_BIN=/usr/src/fio-static/fio
00:01:18.825 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:18.825 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:18.825 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:18.825 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:18.825 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:18.825 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:18.825 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:18.825 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:18.825 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.825 03:51:47 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:18.825 03:51:47 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:01:18.825 03:51:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:01:18.825 03:51:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:18.825 03:51:47 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:18.825 03:51:47 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:18.825 03:51:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:18.825 03:51:47 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:18.825 03:51:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:18.825 03:51:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:18.825 03:51:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:18.825 03:51:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825 03:51:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825 03:51:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825 03:51:47 -- paths/export.sh@5 -- $ export PATH
00:01:18.825 03:51:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:18.825 03:51:47 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:18.825 03:51:47 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:18.825 03:51:47 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733712707.XXXXXX
00:01:18.825 03:51:47 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733712707.HHPJBi
00:01:18.825 03:51:47 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:18.825 03:51:47 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:18.825 03:51:47 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:18.825 03:51:47 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:18.825 03:51:47 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:18.825 03:51:47 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:18.825 03:51:47 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:18.825 03:51:47 -- common/autotest_common.sh@10 -- $ set +x
00:01:18.825 03:51:47 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:18.825 03:51:47 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:18.825 03:51:47 -- pm/common@17 -- $ local monitor
00:01:18.825 03:51:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825 03:51:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825 03:51:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825 03:51:47 -- pm/common@21 -- $ date +%s
00:01:18.825 03:51:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:18.825 03:51:47 -- pm/common@25 -- $ sleep 1
00:01:18.825 03:51:47 -- pm/common@21 -- $ date +%s
00:01:18.825 03:51:47 -- pm/common@21 -- $ date +%s
00:01:18.825 03:51:47 -- pm/common@21 -- $ date +%s
00:01:18.825 03:51:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.825 03:51:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.825 03:51:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.826 03:51:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733712707
00:01:18.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-vmstat.pm.log
00:01:18.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-cpu-load.pm.log
00:01:18.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-cpu-temp.pm.log
00:01:18.826 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733712707_collect-bmc-pm.bmc.pm.log
00:01:20.204 03:51:48 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:20.204 03:51:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:20.204 03:51:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:20.204 03:51:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:20.204 03:51:48 -- spdk/autobuild.sh@16 -- $ date -u
00:01:20.204 Mon Dec 9 02:51:48 AM UTC 2024
00:01:20.204 03:51:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:20.204 v25.01-pre-316-gc4269c6e2
00:01:20.204 03:51:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:20.204 03:51:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:20.204 03:51:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:20.204 03:51:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:20.204 03:51:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:20.204 03:51:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:20.204 ************************************
00:01:20.204 START TEST ubsan
00:01:20.204 ************************************
00:01:20.204 03:51:48 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:20.204 using ubsan
00:01:20.204 
00:01:20.204 real 0m0.000s
00:01:20.204 user 0m0.000s
00:01:20.204 sys 0m0.000s
00:01:20.204 03:51:48 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:20.204 03:51:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:20.204 ************************************
00:01:20.204 END TEST ubsan
00:01:20.204 ************************************
00:01:20.204 03:51:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:20.204 03:51:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:20.204 03:51:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:20.204 03:51:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:20.204 03:51:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:20.204 03:51:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:20.204 03:51:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:20.204 03:51:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:20.204 03:51:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:20.464 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:20.464 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:21.402 Using 'verbs' RDMA provider
00:01:34.563 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:44.541 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:44.541 Creating mk/config.mk...done.
00:01:44.541 Creating mk/cc.flags.mk...done.
00:01:44.541 Type 'make' to build.
00:01:44.541 03:52:13 -- spdk/autobuild.sh@70 -- $ run_test make make -j48
00:01:44.541 03:52:13 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:44.541 03:52:13 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:44.541 03:52:13 -- common/autotest_common.sh@10 -- $ set +x
00:01:44.541 ************************************
00:01:44.541 START TEST make
00:01:44.541 ************************************
00:01:44.541 03:52:13 make -- common/autotest_common.sh@1129 -- $ make -j48
00:01:44.801 make[1]: Nothing to be done for 'all'.
00:01:47.372 The Meson build system 00:01:47.372 Version: 1.5.0 00:01:47.372 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:47.372 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:47.372 Build type: native build 00:01:47.372 Project name: libvfio-user 00:01:47.372 Project version: 0.0.1 00:01:47.372 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:47.372 C linker for the host machine: cc ld.bfd 2.40-14 00:01:47.372 Host machine cpu family: x86_64 00:01:47.372 Host machine cpu: x86_64 00:01:47.372 Run-time dependency threads found: YES 00:01:47.372 Library dl found: YES 00:01:47.372 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:47.372 Run-time dependency json-c found: YES 0.17 00:01:47.372 Run-time dependency cmocka found: YES 1.1.7 00:01:47.372 Program pytest-3 found: NO 00:01:47.372 Program flake8 found: NO 00:01:47.372 Program misspell-fixer found: NO 00:01:47.372 Program restructuredtext-lint found: NO 00:01:47.372 Program valgrind found: YES (/usr/bin/valgrind) 00:01:47.372 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.372 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.372 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.372 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:47.372 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:47.372 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:47.372 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:47.372 Build targets in project: 8 00:01:47.372 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:47.372 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:47.372 00:01:47.372 libvfio-user 0.0.1 00:01:47.372 00:01:47.372 User defined options 00:01:47.372 buildtype : debug 00:01:47.372 default_library: shared 00:01:47.372 libdir : /usr/local/lib 00:01:47.372 00:01:47.372 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.323 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:48.323 [1/37] Compiling C object samples/null.p/null.c.o 00:01:48.589 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:48.589 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:48.589 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:48.589 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:48.589 [6/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:48.589 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:48.589 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:48.589 [9/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:48.589 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:48.589 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:48.589 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:48.589 [13/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:48.589 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:48.589 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:48.589 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:48.589 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:48.589 [18/37] Compiling C object 
test/unit_tests.p/.._lib_tran_sock.c.o 00:01:48.589 [19/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:48.589 [20/37] Compiling C object samples/server.p/server.c.o 00:01:48.589 [21/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:48.589 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:48.589 [23/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:48.589 [24/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:48.589 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:48.589 [26/37] Compiling C object samples/client.p/client.c.o 00:01:48.589 [27/37] Linking target samples/client 00:01:48.851 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:48.851 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:48.851 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:01:48.851 [31/37] Linking target test/unit_tests 00:01:49.116 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:49.116 [33/37] Linking target samples/lspci 00:01:49.116 [34/37] Linking target samples/shadow_ioeventfd_server 00:01:49.116 [35/37] Linking target samples/gpio-pci-idio-16 00:01:49.116 [36/37] Linking target samples/server 00:01:49.116 [37/37] Linking target samples/null 00:01:49.116 INFO: autodetecting backend as ninja 00:01:49.116 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:49.382 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:49.964 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:49.964 ninja: no work to do. 
00:01:54.159 The Meson build system 00:01:54.159 Version: 1.5.0 00:01:54.159 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:54.159 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:54.159 Build type: native build 00:01:54.159 Program cat found: YES (/usr/bin/cat) 00:01:54.159 Project name: DPDK 00:01:54.159 Project version: 24.03.0 00:01:54.159 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:01:54.159 C linker for the host machine: cc ld.bfd 2.40-14 00:01:54.159 Host machine cpu family: x86_64 00:01:54.159 Host machine cpu: x86_64 00:01:54.159 Message: ## Building in Developer Mode ## 00:01:54.159 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:54.159 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:54.159 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:54.159 Program python3 found: YES (/usr/bin/python3) 00:01:54.159 Program cat found: YES (/usr/bin/cat) 00:01:54.159 Compiler for C supports arguments -march=native: YES 00:01:54.159 Checking for size of "void *" : 8 00:01:54.159 Checking for size of "void *" : 8 (cached) 00:01:54.159 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:01:54.159 Library m found: YES 00:01:54.159 Library numa found: YES 00:01:54.159 Has header "numaif.h" : YES 00:01:54.159 Library fdt found: NO 00:01:54.159 Library execinfo found: NO 00:01:54.159 Has header "execinfo.h" : YES 00:01:54.159 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:01:54.159 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:54.159 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:54.159 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:54.159 Run-time dependency openssl found: YES 3.1.1 00:01:54.159 Run-time 
dependency libpcap found: YES 1.10.4 00:01:54.159 Has header "pcap.h" with dependency libpcap: YES 00:01:54.159 Compiler for C supports arguments -Wcast-qual: YES 00:01:54.159 Compiler for C supports arguments -Wdeprecated: YES 00:01:54.159 Compiler for C supports arguments -Wformat: YES 00:01:54.159 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:54.159 Compiler for C supports arguments -Wformat-security: NO 00:01:54.159 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:54.159 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:54.159 Compiler for C supports arguments -Wnested-externs: YES 00:01:54.159 Compiler for C supports arguments -Wold-style-definition: YES 00:01:54.159 Compiler for C supports arguments -Wpointer-arith: YES 00:01:54.159 Compiler for C supports arguments -Wsign-compare: YES 00:01:54.159 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:54.159 Compiler for C supports arguments -Wundef: YES 00:01:54.159 Compiler for C supports arguments -Wwrite-strings: YES 00:01:54.159 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:54.159 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:54.159 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:54.159 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:54.159 Program objdump found: YES (/usr/bin/objdump) 00:01:54.159 Compiler for C supports arguments -mavx512f: YES 00:01:54.159 Checking if "AVX512 checking" compiles: YES 00:01:54.159 Fetching value of define "__SSE4_2__" : 1 00:01:54.159 Fetching value of define "__AES__" : 1 00:01:54.159 Fetching value of define "__AVX__" : 1 00:01:54.159 Fetching value of define "__AVX2__" : (undefined) 00:01:54.159 Fetching value of define "__AVX512BW__" : (undefined) 00:01:54.159 Fetching value of define "__AVX512CD__" : (undefined) 00:01:54.159 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:54.159 Fetching 
value of define "__AVX512F__" : (undefined) 00:01:54.159 Fetching value of define "__AVX512VL__" : (undefined) 00:01:54.159 Fetching value of define "__PCLMUL__" : 1 00:01:54.159 Fetching value of define "__RDRND__" : 1 00:01:54.159 Fetching value of define "__RDSEED__" : (undefined) 00:01:54.159 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:54.159 Fetching value of define "__znver1__" : (undefined) 00:01:54.159 Fetching value of define "__znver2__" : (undefined) 00:01:54.159 Fetching value of define "__znver3__" : (undefined) 00:01:54.159 Fetching value of define "__znver4__" : (undefined) 00:01:54.159 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:54.159 Message: lib/log: Defining dependency "log" 00:01:54.159 Message: lib/kvargs: Defining dependency "kvargs" 00:01:54.159 Message: lib/telemetry: Defining dependency "telemetry" 00:01:54.159 Checking for function "getentropy" : NO 00:01:54.159 Message: lib/eal: Defining dependency "eal" 00:01:54.159 Message: lib/ring: Defining dependency "ring" 00:01:54.159 Message: lib/rcu: Defining dependency "rcu" 00:01:54.159 Message: lib/mempool: Defining dependency "mempool" 00:01:54.159 Message: lib/mbuf: Defining dependency "mbuf" 00:01:54.159 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:54.159 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:54.159 Compiler for C supports arguments -mpclmul: YES 00:01:54.159 Compiler for C supports arguments -maes: YES 00:01:54.159 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:54.159 Compiler for C supports arguments -mavx512bw: YES 00:01:54.159 Compiler for C supports arguments -mavx512dq: YES 00:01:54.159 Compiler for C supports arguments -mavx512vl: YES 00:01:54.159 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:54.159 Compiler for C supports arguments -mavx2: YES 00:01:54.159 Compiler for C supports arguments -mavx: YES 00:01:54.159 Message: lib/net: Defining dependency "net" 00:01:54.160 
Message: lib/meter: Defining dependency "meter" 00:01:54.160 Message: lib/ethdev: Defining dependency "ethdev" 00:01:54.160 Message: lib/pci: Defining dependency "pci" 00:01:54.160 Message: lib/cmdline: Defining dependency "cmdline" 00:01:54.160 Message: lib/hash: Defining dependency "hash" 00:01:54.160 Message: lib/timer: Defining dependency "timer" 00:01:54.160 Message: lib/compressdev: Defining dependency "compressdev" 00:01:54.160 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:54.160 Message: lib/dmadev: Defining dependency "dmadev" 00:01:54.160 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:54.160 Message: lib/power: Defining dependency "power" 00:01:54.160 Message: lib/reorder: Defining dependency "reorder" 00:01:54.160 Message: lib/security: Defining dependency "security" 00:01:54.160 Has header "linux/userfaultfd.h" : YES 00:01:54.160 Has header "linux/vduse.h" : YES 00:01:54.160 Message: lib/vhost: Defining dependency "vhost" 00:01:54.160 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:54.160 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:54.160 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:54.160 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.160 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.160 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.160 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.160 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.160 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.160 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.160 Program doxygen found: YES (/usr/local/bin/doxygen) 00:01:54.160 Configuring doxy-api-html.conf using configuration 00:01:54.160 Configuring doxy-api-man.conf using configuration 00:01:54.160 
Program mandb found: YES (/usr/bin/mandb) 00:01:54.160 Program sphinx-build found: NO 00:01:54.160 Configuring rte_build_config.h using configuration 00:01:54.160 Message: 00:01:54.160 ================= 00:01:54.160 Applications Enabled 00:01:54.160 ================= 00:01:54.160 00:01:54.160 apps: 00:01:54.160 00:01:54.160 00:01:54.160 Message: 00:01:54.160 ================= 00:01:54.160 Libraries Enabled 00:01:54.160 ================= 00:01:54.160 00:01:54.160 libs: 00:01:54.160 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.160 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.160 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.160 00:01:54.160 Message: 00:01:54.160 =============== 00:01:54.160 Drivers Enabled 00:01:54.160 =============== 00:01:54.160 00:01:54.160 common: 00:01:54.160 00:01:54.160 bus: 00:01:54.160 pci, vdev, 00:01:54.160 mempool: 00:01:54.160 ring, 00:01:54.160 dma: 00:01:54.160 00:01:54.160 net: 00:01:54.160 00:01:54.160 crypto: 00:01:54.160 00:01:54.160 compress: 00:01:54.160 00:01:54.160 vdpa: 00:01:54.160 00:01:54.160 00:01:54.160 Message: 00:01:54.160 ================= 00:01:54.160 Content Skipped 00:01:54.160 ================= 00:01:54.160 00:01:54.160 apps: 00:01:54.160 dumpcap: explicitly disabled via build config 00:01:54.160 graph: explicitly disabled via build config 00:01:54.160 pdump: explicitly disabled via build config 00:01:54.160 proc-info: explicitly disabled via build config 00:01:54.160 test-acl: explicitly disabled via build config 00:01:54.160 test-bbdev: explicitly disabled via build config 00:01:54.160 test-cmdline: explicitly disabled via build config 00:01:54.160 test-compress-perf: explicitly disabled via build config 00:01:54.160 test-crypto-perf: explicitly disabled via build config 00:01:54.160 test-dma-perf: explicitly disabled via build config 00:01:54.160 test-eventdev: explicitly disabled via build config 00:01:54.160 test-fib: explicitly disabled via build 
config 00:01:54.160 test-flow-perf: explicitly disabled via build config 00:01:54.160 test-gpudev: explicitly disabled via build config 00:01:54.160 test-mldev: explicitly disabled via build config 00:01:54.160 test-pipeline: explicitly disabled via build config 00:01:54.160 test-pmd: explicitly disabled via build config 00:01:54.160 test-regex: explicitly disabled via build config 00:01:54.160 test-sad: explicitly disabled via build config 00:01:54.160 test-security-perf: explicitly disabled via build config 00:01:54.160 00:01:54.160 libs: 00:01:54.160 argparse: explicitly disabled via build config 00:01:54.160 metrics: explicitly disabled via build config 00:01:54.160 acl: explicitly disabled via build config 00:01:54.160 bbdev: explicitly disabled via build config 00:01:54.160 bitratestats: explicitly disabled via build config 00:01:54.160 bpf: explicitly disabled via build config 00:01:54.160 cfgfile: explicitly disabled via build config 00:01:54.160 distributor: explicitly disabled via build config 00:01:54.160 efd: explicitly disabled via build config 00:01:54.160 eventdev: explicitly disabled via build config 00:01:54.160 dispatcher: explicitly disabled via build config 00:01:54.160 gpudev: explicitly disabled via build config 00:01:54.160 gro: explicitly disabled via build config 00:01:54.160 gso: explicitly disabled via build config 00:01:54.160 ip_frag: explicitly disabled via build config 00:01:54.160 jobstats: explicitly disabled via build config 00:01:54.160 latencystats: explicitly disabled via build config 00:01:54.160 lpm: explicitly disabled via build config 00:01:54.160 member: explicitly disabled via build config 00:01:54.160 pcapng: explicitly disabled via build config 00:01:54.160 rawdev: explicitly disabled via build config 00:01:54.160 regexdev: explicitly disabled via build config 00:01:54.160 mldev: explicitly disabled via build config 00:01:54.160 rib: explicitly disabled via build config 00:01:54.160 sched: explicitly disabled via build 
config 00:01:54.160 stack: explicitly disabled via build config 00:01:54.160 ipsec: explicitly disabled via build config 00:01:54.160 pdcp: explicitly disabled via build config 00:01:54.160 fib: explicitly disabled via build config 00:01:54.160 port: explicitly disabled via build config 00:01:54.160 pdump: explicitly disabled via build config 00:01:54.160 table: explicitly disabled via build config 00:01:54.160 pipeline: explicitly disabled via build config 00:01:54.160 graph: explicitly disabled via build config 00:01:54.160 node: explicitly disabled via build config 00:01:54.160 00:01:54.160 drivers: 00:01:54.160 common/cpt: not in enabled drivers build config 00:01:54.160 common/dpaax: not in enabled drivers build config 00:01:54.160 common/iavf: not in enabled drivers build config 00:01:54.160 common/idpf: not in enabled drivers build config 00:01:54.160 common/ionic: not in enabled drivers build config 00:01:54.160 common/mvep: not in enabled drivers build config 00:01:54.160 common/octeontx: not in enabled drivers build config 00:01:54.160 bus/auxiliary: not in enabled drivers build config 00:01:54.160 bus/cdx: not in enabled drivers build config 00:01:54.160 bus/dpaa: not in enabled drivers build config 00:01:54.160 bus/fslmc: not in enabled drivers build config 00:01:54.160 bus/ifpga: not in enabled drivers build config 00:01:54.160 bus/platform: not in enabled drivers build config 00:01:54.160 bus/uacce: not in enabled drivers build config 00:01:54.160 bus/vmbus: not in enabled drivers build config 00:01:54.160 common/cnxk: not in enabled drivers build config 00:01:54.160 common/mlx5: not in enabled drivers build config 00:01:54.160 common/nfp: not in enabled drivers build config 00:01:54.160 common/nitrox: not in enabled drivers build config 00:01:54.160 common/qat: not in enabled drivers build config 00:01:54.160 common/sfc_efx: not in enabled drivers build config 00:01:54.160 mempool/bucket: not in enabled drivers build config 00:01:54.160 mempool/cnxk: 
not in enabled drivers build config 00:01:54.160 mempool/dpaa: not in enabled drivers build config 00:01:54.160 mempool/dpaa2: not in enabled drivers build config 00:01:54.160 mempool/octeontx: not in enabled drivers build config 00:01:54.160 mempool/stack: not in enabled drivers build config 00:01:54.160 dma/cnxk: not in enabled drivers build config 00:01:54.160 dma/dpaa: not in enabled drivers build config 00:01:54.160 dma/dpaa2: not in enabled drivers build config 00:01:54.160 dma/hisilicon: not in enabled drivers build config 00:01:54.160 dma/idxd: not in enabled drivers build config 00:01:54.160 dma/ioat: not in enabled drivers build config 00:01:54.160 dma/skeleton: not in enabled drivers build config 00:01:54.160 net/af_packet: not in enabled drivers build config 00:01:54.160 net/af_xdp: not in enabled drivers build config 00:01:54.160 net/ark: not in enabled drivers build config 00:01:54.160 net/atlantic: not in enabled drivers build config 00:01:54.160 net/avp: not in enabled drivers build config 00:01:54.160 net/axgbe: not in enabled drivers build config 00:01:54.160 net/bnx2x: not in enabled drivers build config 00:01:54.160 net/bnxt: not in enabled drivers build config 00:01:54.160 net/bonding: not in enabled drivers build config 00:01:54.160 net/cnxk: not in enabled drivers build config 00:01:54.160 net/cpfl: not in enabled drivers build config 00:01:54.160 net/cxgbe: not in enabled drivers build config 00:01:54.160 net/dpaa: not in enabled drivers build config 00:01:54.160 net/dpaa2: not in enabled drivers build config 00:01:54.160 net/e1000: not in enabled drivers build config 00:01:54.160 net/ena: not in enabled drivers build config 00:01:54.160 net/enetc: not in enabled drivers build config 00:01:54.160 net/enetfec: not in enabled drivers build config 00:01:54.160 net/enic: not in enabled drivers build config 00:01:54.160 net/failsafe: not in enabled drivers build config 00:01:54.160 net/fm10k: not in enabled drivers build config 00:01:54.160 
net/gve: not in enabled drivers build config 00:01:54.160 net/hinic: not in enabled drivers build config 00:01:54.160 net/hns3: not in enabled drivers build config 00:01:54.160 net/i40e: not in enabled drivers build config 00:01:54.160 net/iavf: not in enabled drivers build config 00:01:54.160 net/ice: not in enabled drivers build config 00:01:54.160 net/idpf: not in enabled drivers build config 00:01:54.160 net/igc: not in enabled drivers build config 00:01:54.160 net/ionic: not in enabled drivers build config 00:01:54.160 net/ipn3ke: not in enabled drivers build config 00:01:54.161 net/ixgbe: not in enabled drivers build config 00:01:54.161 net/mana: not in enabled drivers build config 00:01:54.161 net/memif: not in enabled drivers build config 00:01:54.161 net/mlx4: not in enabled drivers build config 00:01:54.161 net/mlx5: not in enabled drivers build config 00:01:54.161 net/mvneta: not in enabled drivers build config 00:01:54.161 net/mvpp2: not in enabled drivers build config 00:01:54.161 net/netvsc: not in enabled drivers build config 00:01:54.161 net/nfb: not in enabled drivers build config 00:01:54.161 net/nfp: not in enabled drivers build config 00:01:54.161 net/ngbe: not in enabled drivers build config 00:01:54.161 net/null: not in enabled drivers build config 00:01:54.161 net/octeontx: not in enabled drivers build config 00:01:54.161 net/octeon_ep: not in enabled drivers build config 00:01:54.161 net/pcap: not in enabled drivers build config 00:01:54.161 net/pfe: not in enabled drivers build config 00:01:54.161 net/qede: not in enabled drivers build config 00:01:54.161 net/ring: not in enabled drivers build config 00:01:54.161 net/sfc: not in enabled drivers build config 00:01:54.161 net/softnic: not in enabled drivers build config 00:01:54.161 net/tap: not in enabled drivers build config 00:01:54.161 net/thunderx: not in enabled drivers build config 00:01:54.161 net/txgbe: not in enabled drivers build config 00:01:54.161 net/vdev_netvsc: not in enabled 
drivers build config 00:01:54.161 net/vhost: not in enabled drivers build config 00:01:54.161 net/virtio: not in enabled drivers build config 00:01:54.161 net/vmxnet3: not in enabled drivers build config 00:01:54.161 raw/*: missing internal dependency, "rawdev" 00:01:54.161 crypto/armv8: not in enabled drivers build config 00:01:54.161 crypto/bcmfs: not in enabled drivers build config 00:01:54.161 crypto/caam_jr: not in enabled drivers build config 00:01:54.161 crypto/ccp: not in enabled drivers build config 00:01:54.161 crypto/cnxk: not in enabled drivers build config 00:01:54.161 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.161 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.161 crypto/ipsec_mb: not in enabled drivers build config 00:01:54.161 crypto/mlx5: not in enabled drivers build config 00:01:54.161 crypto/mvsam: not in enabled drivers build config 00:01:54.161 crypto/nitrox: not in enabled drivers build config 00:01:54.161 crypto/null: not in enabled drivers build config 00:01:54.161 crypto/octeontx: not in enabled drivers build config 00:01:54.161 crypto/openssl: not in enabled drivers build config 00:01:54.161 crypto/scheduler: not in enabled drivers build config 00:01:54.161 crypto/uadk: not in enabled drivers build config 00:01:54.161 crypto/virtio: not in enabled drivers build config 00:01:54.161 compress/isal: not in enabled drivers build config 00:01:54.161 compress/mlx5: not in enabled drivers build config 00:01:54.161 compress/nitrox: not in enabled drivers build config 00:01:54.161 compress/octeontx: not in enabled drivers build config 00:01:54.161 compress/zlib: not in enabled drivers build config 00:01:54.161 regex/*: missing internal dependency, "regexdev" 00:01:54.161 ml/*: missing internal dependency, "mldev" 00:01:54.161 vdpa/ifc: not in enabled drivers build config 00:01:54.161 vdpa/mlx5: not in enabled drivers build config 00:01:54.161 vdpa/nfp: not in enabled drivers build config 00:01:54.161 vdpa/sfc: not 
in enabled drivers build config 00:01:54.161 event/*: missing internal dependency, "eventdev" 00:01:54.161 baseband/*: missing internal dependency, "bbdev" 00:01:54.161 gpu/*: missing internal dependency, "gpudev" 00:01:54.161 00:01:54.161 00:01:54.421 Build targets in project: 85 00:01:54.421 00:01:54.421 DPDK 24.03.0 00:01:54.421 00:01:54.421 User defined options 00:01:54.421 buildtype : debug 00:01:54.421 default_library : shared 00:01:54.421 libdir : lib 00:01:54.421 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:54.421 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:54.421 c_link_args : 00:01:54.421 cpu_instruction_set: native 00:01:54.421 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:01:54.421 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:01:54.421 enable_docs : false 00:01:54.421 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:01:54.421 enable_kmods : false 00:01:54.421 max_lcores : 128 00:01:54.421 tests : false 00:01:54.421 00:01:54.421 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.998 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:54.998 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:54.998 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:54.998 [3/268] Compiling C object 
lib/librte_log.a.p/log_log_linux.c.o 00:01:54.998 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:54.998 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:54.998 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:54.998 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:54.998 [8/268] Linking static target lib/librte_kvargs.a 00:01:54.998 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:54.998 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:54.998 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:54.998 [12/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:54.998 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:54.998 [14/268] Linking static target lib/librte_log.a 00:01:54.998 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.261 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.838 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.838 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.838 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.838 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.838 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.838 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.838 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.838 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.838 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.838 [26/268] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.838 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.838 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.838 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.838 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.838 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.838 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.838 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.838 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.838 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.838 [36/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:55.838 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.838 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.838 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.838 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.838 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.838 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.838 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.838 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.838 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:56.104 [46/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:56.104 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.104 [48/268] Compiling C 
object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:56.104 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:56.104 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:56.104 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:56.104 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:56.105 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:56.105 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.105 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.105 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.105 [57/268] Linking static target lib/librte_telemetry.a 00:01:56.105 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:56.105 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.105 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:56.105 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:56.105 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:56.105 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.374 [64/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.374 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:56.374 [66/268] Linking target lib/librte_log.so.24.1 00:01:56.374 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:56.635 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.635 [69/268] Linking static target lib/librte_pci.a 00:01:56.635 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.635 [71/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.635 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:56.635 [73/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.901 [74/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.901 [75/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:56.901 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:56.901 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.901 [78/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.901 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.901 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:56.901 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.901 [82/268] Linking target lib/librte_kvargs.so.24.1 00:01:56.901 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.901 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.901 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.901 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:56.901 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:56.901 [88/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:56.901 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.901 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.901 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.901 [92/268] Linking static target lib/librte_meter.a 00:01:56.901 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.901 [94/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.902 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.902 [96/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:56.902 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:56.902 [98/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:56.902 [99/268] Linking static target lib/librte_ring.a 00:01:56.902 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:56.902 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.902 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:57.166 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:57.166 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:57.166 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:57.166 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:57.166 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.166 [108/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.167 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:57.167 [110/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:57.167 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:57.167 [112/268] Linking static target lib/librte_rcu.a 00:01:57.167 [113/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:57.167 [114/268] Linking static target lib/librte_mempool.a 00:01:57.167 [115/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:57.167 [116/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:57.167 [117/268] Linking target 
lib/librte_telemetry.so.24.1 00:01:57.167 [118/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:57.167 [119/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:57.167 [120/268] Linking static target lib/librte_eal.a 00:01:57.167 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:57.167 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:57.167 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:57.167 [124/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:57.167 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:57.432 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:57.432 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:57.432 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:57.432 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:57.432 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:57.432 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:57.432 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:57.432 [133/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:57.432 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:57.432 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:57.432 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:57.432 [137/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.696 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:57.696 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:57.696 [140/268] Linking static target 
lib/librte_net.a 00:01:57.696 [141/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:57.696 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.696 [143/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.696 [144/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:57.956 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:57.956 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:57.956 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:57.956 [148/268] Linking static target lib/librte_cmdline.a 00:01:57.956 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:57.956 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:57.956 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.956 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.956 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:57.956 [154/268] Linking static target lib/librte_timer.a 00:01:57.956 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:58.214 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:58.214 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:58.214 [158/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:58.214 [159/268] Linking static target lib/librte_dmadev.a 00:01:58.214 [160/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.214 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:58.214 [162/268] Compiling C 
object lib/librte_power.a.p/power_rte_power.c.o 00:01:58.214 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:58.214 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:58.214 [165/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:58.214 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:58.214 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:58.214 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.214 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:58.472 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:58.472 [171/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.472 [172/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:58.472 [173/268] Linking static target lib/librte_compressdev.a 00:01:58.472 [174/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:58.472 [175/268] Linking static target lib/librte_power.a 00:01:58.472 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:58.472 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:58.472 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:58.472 [179/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:58.472 [180/268] Linking static target lib/librte_hash.a 00:01:58.472 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:58.472 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:58.472 [183/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:58.472 [184/268] Linking static target lib/librte_reorder.a 00:01:58.472 
[185/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:58.730 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:58.730 [187/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:58.730 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:58.730 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.730 [190/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:58.730 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:58.730 [192/268] Linking static target lib/librte_mbuf.a 00:01:58.730 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:58.730 [194/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:58.730 [195/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:58.730 [196/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.730 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:58.989 [198/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.989 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:58.989 [200/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:58.989 [201/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.989 [202/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:58.989 [203/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.989 [204/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:58.989 [205/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:58.989 [206/268] Linking static target drivers/librte_bus_vdev.a 
00:01:58.989 [207/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.989 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:58.989 [209/268] Linking static target drivers/librte_bus_pci.a 00:01:58.989 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.989 [211/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.989 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:58.989 [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:58.989 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.989 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:58.989 [216/268] Linking static target drivers/librte_mempool_ring.a 00:01:59.247 [217/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.247 [218/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:59.247 [219/268] Linking static target lib/librte_security.a 00:01:59.247 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.247 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:59.247 [222/268] Linking static target lib/librte_ethdev.a 00:01:59.247 [223/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.247 [224/268] Linking static target lib/librte_cryptodev.a 00:01:59.505 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.505 [226/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.440 [227/268] Generating lib/cryptodev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:02:01.814 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:03.188 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.188 [230/268] Linking target lib/librte_eal.so.24.1 00:02:03.447 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.447 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:03.447 [233/268] Linking target lib/librte_ring.so.24.1 00:02:03.447 [234/268] Linking target lib/librte_meter.so.24.1 00:02:03.447 [235/268] Linking target lib/librte_pci.so.24.1 00:02:03.447 [236/268] Linking target lib/librte_timer.so.24.1 00:02:03.447 [237/268] Linking target lib/librte_dmadev.so.24.1 00:02:03.447 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:03.447 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:03.447 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:03.447 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:03.447 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:03.447 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:03.705 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:03.705 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:03.705 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:03.705 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:03.705 [248/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:03.705 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:03.705 [250/268] Linking target lib/librte_mbuf.so.24.1 00:02:03.964 [251/268] Generating symbol file 
lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:03.964 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:03.964 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:03.964 [254/268] Linking target lib/librte_net.so.24.1 00:02:03.964 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:03.964 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:03.964 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:03.964 [258/268] Linking target lib/librte_hash.so.24.1 00:02:03.964 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:03.964 [260/268] Linking target lib/librte_security.so.24.1 00:02:04.221 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:04.221 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:04.221 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:04.221 [264/268] Linking target lib/librte_power.so.24.1 00:02:08.406 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:08.406 [266/268] Linking static target lib/librte_vhost.a 00:02:08.673 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.673 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:08.673 INFO: autodetecting backend as ninja 00:02:08.673 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:30.601 CC lib/ut_mock/mock.o 00:02:30.601 CC lib/ut/ut.o 00:02:30.601 CC lib/log/log.o 00:02:30.601 CC lib/log/log_flags.o 00:02:30.601 CC lib/log/log_deprecated.o 00:02:30.601 LIB libspdk_ut.a 00:02:30.601 LIB libspdk_ut_mock.a 00:02:30.601 LIB libspdk_log.a 00:02:30.601 SO libspdk_ut.so.2.0 00:02:30.601 SO libspdk_ut_mock.so.6.0 00:02:30.601 SO libspdk_log.so.7.1 00:02:30.601 SYMLINK libspdk_ut_mock.so 00:02:30.601 
SYMLINK libspdk_ut.so 00:02:30.601 SYMLINK libspdk_log.so 00:02:30.601 CC lib/ioat/ioat.o 00:02:30.601 CC lib/dma/dma.o 00:02:30.601 CXX lib/trace_parser/trace.o 00:02:30.601 CC lib/util/base64.o 00:02:30.602 CC lib/util/bit_array.o 00:02:30.602 CC lib/util/cpuset.o 00:02:30.602 CC lib/util/crc16.o 00:02:30.602 CC lib/util/crc32.o 00:02:30.602 CC lib/util/crc32c.o 00:02:30.602 CC lib/util/crc32_ieee.o 00:02:30.602 CC lib/util/crc64.o 00:02:30.602 CC lib/util/dif.o 00:02:30.602 CC lib/util/fd.o 00:02:30.602 CC lib/util/fd_group.o 00:02:30.602 CC lib/util/file.o 00:02:30.602 CC lib/util/hexlify.o 00:02:30.602 CC lib/util/iov.o 00:02:30.602 CC lib/util/math.o 00:02:30.602 CC lib/util/net.o 00:02:30.602 CC lib/util/pipe.o 00:02:30.602 CC lib/util/string.o 00:02:30.602 CC lib/util/strerror_tls.o 00:02:30.602 CC lib/util/uuid.o 00:02:30.602 CC lib/util/xor.o 00:02:30.602 CC lib/util/zipf.o 00:02:30.602 CC lib/util/md5.o 00:02:30.602 CC lib/vfio_user/host/vfio_user.o 00:02:30.602 CC lib/vfio_user/host/vfio_user_pci.o 00:02:30.602 LIB libspdk_dma.a 00:02:30.602 SO libspdk_dma.so.5.0 00:02:30.602 SYMLINK libspdk_dma.so 00:02:30.602 LIB libspdk_ioat.a 00:02:30.602 SO libspdk_ioat.so.7.0 00:02:30.602 SYMLINK libspdk_ioat.so 00:02:30.602 LIB libspdk_vfio_user.a 00:02:30.602 SO libspdk_vfio_user.so.5.0 00:02:30.602 SYMLINK libspdk_vfio_user.so 00:02:30.602 LIB libspdk_util.a 00:02:30.602 SO libspdk_util.so.10.1 00:02:30.602 SYMLINK libspdk_util.so 00:02:30.602 CC lib/conf/conf.o 00:02:30.602 CC lib/rdma_utils/rdma_utils.o 00:02:30.602 CC lib/json/json_parse.o 00:02:30.602 CC lib/idxd/idxd.o 00:02:30.602 CC lib/json/json_util.o 00:02:30.602 CC lib/env_dpdk/env.o 00:02:30.602 CC lib/json/json_write.o 00:02:30.602 CC lib/idxd/idxd_user.o 00:02:30.602 CC lib/vmd/vmd.o 00:02:30.602 CC lib/env_dpdk/memory.o 00:02:30.602 CC lib/idxd/idxd_kernel.o 00:02:30.602 CC lib/vmd/led.o 00:02:30.602 CC lib/env_dpdk/pci.o 00:02:30.602 CC lib/env_dpdk/init.o 00:02:30.602 CC lib/env_dpdk/threads.o 
00:02:30.602 CC lib/env_dpdk/pci_ioat.o 00:02:30.602 CC lib/env_dpdk/pci_virtio.o 00:02:30.602 CC lib/env_dpdk/pci_vmd.o 00:02:30.602 CC lib/env_dpdk/pci_idxd.o 00:02:30.602 CC lib/env_dpdk/pci_event.o 00:02:30.602 CC lib/env_dpdk/sigbus_handler.o 00:02:30.602 CC lib/env_dpdk/pci_dpdk.o 00:02:30.602 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:30.602 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:30.602 LIB libspdk_conf.a 00:02:30.602 SO libspdk_conf.so.6.0 00:02:30.602 LIB libspdk_rdma_utils.a 00:02:30.602 LIB libspdk_json.a 00:02:30.602 SYMLINK libspdk_conf.so 00:02:30.602 SO libspdk_rdma_utils.so.1.0 00:02:30.602 SO libspdk_json.so.6.0 00:02:30.602 SYMLINK libspdk_rdma_utils.so 00:02:30.602 SYMLINK libspdk_json.so 00:02:30.602 CC lib/rdma_provider/common.o 00:02:30.602 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:30.602 CC lib/jsonrpc/jsonrpc_server.o 00:02:30.602 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:30.602 CC lib/jsonrpc/jsonrpc_client.o 00:02:30.602 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:30.602 LIB libspdk_idxd.a 00:02:30.602 SO libspdk_idxd.so.12.1 00:02:30.602 SYMLINK libspdk_idxd.so 00:02:30.602 LIB libspdk_vmd.a 00:02:30.602 LIB libspdk_rdma_provider.a 00:02:30.602 SO libspdk_vmd.so.6.0 00:02:30.602 SO libspdk_rdma_provider.so.7.0 00:02:30.602 LIB libspdk_jsonrpc.a 00:02:30.602 SYMLINK libspdk_vmd.so 00:02:30.602 SO libspdk_jsonrpc.so.6.0 00:02:30.602 SYMLINK libspdk_rdma_provider.so 00:02:30.602 SYMLINK libspdk_jsonrpc.so 00:02:30.602 LIB libspdk_trace_parser.a 00:02:30.602 SO libspdk_trace_parser.so.6.0 00:02:30.602 SYMLINK libspdk_trace_parser.so 00:02:30.602 CC lib/rpc/rpc.o 00:02:30.860 LIB libspdk_rpc.a 00:02:30.860 SO libspdk_rpc.so.6.0 00:02:30.860 SYMLINK libspdk_rpc.so 00:02:31.118 CC lib/keyring/keyring.o 00:02:31.118 CC lib/keyring/keyring_rpc.o 00:02:31.118 CC lib/trace/trace.o 00:02:31.118 CC lib/notify/notify.o 00:02:31.118 CC lib/trace/trace_flags.o 00:02:31.118 CC lib/notify/notify_rpc.o 00:02:31.118 CC lib/trace/trace_rpc.o 00:02:31.377 LIB 
libspdk_notify.a 00:02:31.377 SO libspdk_notify.so.6.0 00:02:31.377 SYMLINK libspdk_notify.so 00:02:31.377 LIB libspdk_keyring.a 00:02:31.377 LIB libspdk_trace.a 00:02:31.377 SO libspdk_keyring.so.2.0 00:02:31.377 SO libspdk_trace.so.11.0 00:02:31.377 SYMLINK libspdk_keyring.so 00:02:31.377 SYMLINK libspdk_trace.so 00:02:31.635 CC lib/thread/thread.o 00:02:31.635 CC lib/thread/iobuf.o 00:02:31.635 CC lib/sock/sock.o 00:02:31.635 CC lib/sock/sock_rpc.o 00:02:31.635 LIB libspdk_env_dpdk.a 00:02:31.635 SO libspdk_env_dpdk.so.15.1 00:02:31.893 SYMLINK libspdk_env_dpdk.so 00:02:32.152 LIB libspdk_sock.a 00:02:32.152 SO libspdk_sock.so.10.0 00:02:32.152 SYMLINK libspdk_sock.so 00:02:32.152 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:32.152 CC lib/nvme/nvme_ctrlr.o 00:02:32.411 CC lib/nvme/nvme_fabric.o 00:02:32.411 CC lib/nvme/nvme_ns_cmd.o 00:02:32.411 CC lib/nvme/nvme_ns.o 00:02:32.411 CC lib/nvme/nvme_pcie_common.o 00:02:32.411 CC lib/nvme/nvme_pcie.o 00:02:32.411 CC lib/nvme/nvme_qpair.o 00:02:32.411 CC lib/nvme/nvme.o 00:02:32.411 CC lib/nvme/nvme_quirks.o 00:02:32.411 CC lib/nvme/nvme_transport.o 00:02:32.411 CC lib/nvme/nvme_discovery.o 00:02:32.411 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:32.411 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:32.411 CC lib/nvme/nvme_tcp.o 00:02:32.411 CC lib/nvme/nvme_opal.o 00:02:32.411 CC lib/nvme/nvme_io_msg.o 00:02:32.411 CC lib/nvme/nvme_poll_group.o 00:02:32.411 CC lib/nvme/nvme_zns.o 00:02:32.411 CC lib/nvme/nvme_stubs.o 00:02:32.411 CC lib/nvme/nvme_auth.o 00:02:32.411 CC lib/nvme/nvme_cuse.o 00:02:32.411 CC lib/nvme/nvme_vfio_user.o 00:02:32.411 CC lib/nvme/nvme_rdma.o 00:02:33.347 LIB libspdk_thread.a 00:02:33.347 SO libspdk_thread.so.11.0 00:02:33.347 SYMLINK libspdk_thread.so 00:02:33.605 CC lib/accel/accel.o 00:02:33.605 CC lib/accel/accel_rpc.o 00:02:33.605 CC lib/fsdev/fsdev.o 00:02:33.605 CC lib/fsdev/fsdev_io.o 00:02:33.605 CC lib/blob/blobstore.o 00:02:33.605 CC lib/accel/accel_sw.o 00:02:33.605 CC lib/fsdev/fsdev_rpc.o 
00:02:33.605 CC lib/blob/request.o 00:02:33.605 CC lib/vfu_tgt/tgt_endpoint.o 00:02:33.605 CC lib/virtio/virtio.o 00:02:33.605 CC lib/blob/zeroes.o 00:02:33.605 CC lib/virtio/virtio_vhost_user.o 00:02:33.605 CC lib/blob/blob_bs_dev.o 00:02:33.605 CC lib/vfu_tgt/tgt_rpc.o 00:02:33.605 CC lib/init/json_config.o 00:02:33.605 CC lib/virtio/virtio_vfio_user.o 00:02:33.605 CC lib/init/subsystem.o 00:02:33.605 CC lib/virtio/virtio_pci.o 00:02:33.605 CC lib/init/subsystem_rpc.o 00:02:33.605 CC lib/init/rpc.o 00:02:33.863 LIB libspdk_init.a 00:02:33.863 SO libspdk_init.so.6.0 00:02:33.863 LIB libspdk_virtio.a 00:02:33.863 SYMLINK libspdk_init.so 00:02:33.863 LIB libspdk_vfu_tgt.a 00:02:34.121 SO libspdk_vfu_tgt.so.3.0 00:02:34.121 SO libspdk_virtio.so.7.0 00:02:34.121 SYMLINK libspdk_vfu_tgt.so 00:02:34.121 SYMLINK libspdk_virtio.so 00:02:34.121 CC lib/event/app.o 00:02:34.121 CC lib/event/reactor.o 00:02:34.121 CC lib/event/log_rpc.o 00:02:34.121 CC lib/event/app_rpc.o 00:02:34.121 CC lib/event/scheduler_static.o 00:02:34.379 LIB libspdk_fsdev.a 00:02:34.379 SO libspdk_fsdev.so.2.0 00:02:34.379 SYMLINK libspdk_fsdev.so 00:02:34.638 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:34.638 LIB libspdk_event.a 00:02:34.638 SO libspdk_event.so.14.0 00:02:34.638 SYMLINK libspdk_event.so 00:02:34.638 LIB libspdk_nvme.a 00:02:34.638 LIB libspdk_accel.a 00:02:34.897 SO libspdk_accel.so.16.0 00:02:34.897 SYMLINK libspdk_accel.so 00:02:34.897 SO libspdk_nvme.so.15.0 00:02:34.897 CC lib/bdev/bdev.o 00:02:34.897 CC lib/bdev/bdev_rpc.o 00:02:34.897 CC lib/bdev/bdev_zone.o 00:02:34.897 CC lib/bdev/part.o 00:02:34.897 CC lib/bdev/scsi_nvme.o 00:02:35.156 SYMLINK libspdk_nvme.so 00:02:35.157 LIB libspdk_fuse_dispatcher.a 00:02:35.157 SO libspdk_fuse_dispatcher.so.1.0 00:02:35.416 SYMLINK libspdk_fuse_dispatcher.so 00:02:36.794 LIB libspdk_blob.a 00:02:36.794 SO libspdk_blob.so.12.0 00:02:36.794 SYMLINK libspdk_blob.so 00:02:37.051 CC lib/blobfs/blobfs.o 00:02:37.051 CC lib/blobfs/tree.o 
00:02:37.051 CC lib/lvol/lvol.o 00:02:37.617 LIB libspdk_bdev.a 00:02:37.876 SO libspdk_bdev.so.17.0 00:02:37.876 SYMLINK libspdk_bdev.so 00:02:37.876 LIB libspdk_blobfs.a 00:02:37.876 SO libspdk_blobfs.so.11.0 00:02:37.876 SYMLINK libspdk_blobfs.so 00:02:37.876 LIB libspdk_lvol.a 00:02:37.876 SO libspdk_lvol.so.11.0 00:02:37.876 CC lib/ublk/ublk.o 00:02:37.876 CC lib/nbd/nbd.o 00:02:37.876 CC lib/ublk/ublk_rpc.o 00:02:37.876 CC lib/nbd/nbd_rpc.o 00:02:37.876 CC lib/scsi/dev.o 00:02:37.876 CC lib/scsi/lun.o 00:02:37.876 CC lib/scsi/port.o 00:02:37.876 CC lib/nvmf/ctrlr.o 00:02:37.876 CC lib/nvmf/ctrlr_discovery.o 00:02:37.876 CC lib/scsi/scsi.o 00:02:37.876 CC lib/ftl/ftl_core.o 00:02:37.876 CC lib/nvmf/ctrlr_bdev.o 00:02:37.876 CC lib/scsi/scsi_bdev.o 00:02:37.876 CC lib/ftl/ftl_init.o 00:02:37.876 CC lib/scsi/scsi_pr.o 00:02:37.876 CC lib/nvmf/subsystem.o 00:02:37.876 CC lib/ftl/ftl_layout.o 00:02:37.876 CC lib/nvmf/nvmf.o 00:02:37.876 CC lib/scsi/scsi_rpc.o 00:02:37.876 CC lib/ftl/ftl_debug.o 00:02:37.876 CC lib/nvmf/nvmf_rpc.o 00:02:37.876 CC lib/ftl/ftl_io.o 00:02:37.876 CC lib/scsi/task.o 00:02:37.876 CC lib/nvmf/transport.o 00:02:37.876 CC lib/ftl/ftl_l2p.o 00:02:37.876 CC lib/nvmf/tcp.o 00:02:37.876 CC lib/ftl/ftl_sb.o 00:02:37.876 CC lib/nvmf/stubs.o 00:02:37.876 CC lib/ftl/ftl_l2p_flat.o 00:02:37.876 CC lib/nvmf/mdns_server.o 00:02:37.876 CC lib/ftl/ftl_nv_cache.o 00:02:37.876 CC lib/nvmf/vfio_user.o 00:02:37.876 CC lib/ftl/ftl_band.o 00:02:37.876 CC lib/nvmf/rdma.o 00:02:37.876 CC lib/ftl/ftl_band_ops.o 00:02:37.876 CC lib/nvmf/auth.o 00:02:37.876 CC lib/ftl/ftl_rq.o 00:02:37.876 CC lib/ftl/ftl_writer.o 00:02:37.876 CC lib/ftl/ftl_reloc.o 00:02:37.876 CC lib/ftl/ftl_l2p_cache.o 00:02:38.142 CC lib/ftl/ftl_p2l.o 00:02:38.142 CC lib/ftl/ftl_p2l_log.o 00:02:38.142 CC lib/ftl/mngt/ftl_mngt.o 00:02:38.142 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:38.142 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:38.142 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:38.142 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:02:38.142 SYMLINK libspdk_lvol.so 00:02:38.142 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:38.404 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:38.404 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:38.404 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:38.404 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:38.405 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:38.405 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:38.405 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:38.405 CC lib/ftl/utils/ftl_conf.o 00:02:38.405 CC lib/ftl/utils/ftl_md.o 00:02:38.405 CC lib/ftl/utils/ftl_mempool.o 00:02:38.405 CC lib/ftl/utils/ftl_bitmap.o 00:02:38.405 CC lib/ftl/utils/ftl_property.o 00:02:38.405 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:38.405 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:38.405 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:38.671 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:38.671 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:38.671 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:38.671 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:38.671 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:38.671 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:38.671 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:38.671 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:38.671 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:38.671 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:38.671 CC lib/ftl/base/ftl_base_dev.o 00:02:38.671 CC lib/ftl/base/ftl_base_bdev.o 00:02:38.671 CC lib/ftl/ftl_trace.o 00:02:38.931 LIB libspdk_nbd.a 00:02:38.931 SO libspdk_nbd.so.7.0 00:02:38.931 LIB libspdk_scsi.a 00:02:38.931 SYMLINK libspdk_nbd.so 00:02:38.931 SO libspdk_scsi.so.9.0 00:02:39.190 SYMLINK libspdk_scsi.so 00:02:39.190 LIB libspdk_ublk.a 00:02:39.190 SO libspdk_ublk.so.3.0 00:02:39.190 SYMLINK libspdk_ublk.so 00:02:39.190 CC lib/iscsi/conn.o 00:02:39.190 CC lib/vhost/vhost.o 00:02:39.190 CC lib/iscsi/init_grp.o 00:02:39.190 CC lib/iscsi/iscsi.o 00:02:39.190 CC lib/vhost/vhost_rpc.o 00:02:39.190 CC lib/iscsi/param.o 00:02:39.190 CC lib/vhost/vhost_scsi.o 00:02:39.190 CC 
lib/vhost/vhost_blk.o 00:02:39.190 CC lib/iscsi/portal_grp.o 00:02:39.190 CC lib/iscsi/tgt_node.o 00:02:39.190 CC lib/vhost/rte_vhost_user.o 00:02:39.190 CC lib/iscsi/iscsi_subsystem.o 00:02:39.190 CC lib/iscsi/iscsi_rpc.o 00:02:39.190 CC lib/iscsi/task.o 00:02:39.449 LIB libspdk_ftl.a 00:02:39.708 SO libspdk_ftl.so.9.0 00:02:39.966 SYMLINK libspdk_ftl.so 00:02:40.532 LIB libspdk_vhost.a 00:02:40.532 SO libspdk_vhost.so.8.0 00:02:40.532 SYMLINK libspdk_vhost.so 00:02:40.791 LIB libspdk_iscsi.a 00:02:40.791 LIB libspdk_nvmf.a 00:02:40.791 SO libspdk_iscsi.so.8.0 00:02:40.791 SO libspdk_nvmf.so.20.0 00:02:40.791 SYMLINK libspdk_iscsi.so 00:02:41.049 SYMLINK libspdk_nvmf.so 00:02:41.307 CC module/env_dpdk/env_dpdk_rpc.o 00:02:41.307 CC module/vfu_device/vfu_virtio.o 00:02:41.307 CC module/vfu_device/vfu_virtio_blk.o 00:02:41.307 CC module/vfu_device/vfu_virtio_scsi.o 00:02:41.307 CC module/vfu_device/vfu_virtio_rpc.o 00:02:41.307 CC module/vfu_device/vfu_virtio_fs.o 00:02:41.307 CC module/accel/error/accel_error.o 00:02:41.307 CC module/blob/bdev/blob_bdev.o 00:02:41.307 CC module/accel/ioat/accel_ioat.o 00:02:41.307 CC module/accel/error/accel_error_rpc.o 00:02:41.307 CC module/keyring/linux/keyring.o 00:02:41.307 CC module/accel/ioat/accel_ioat_rpc.o 00:02:41.307 CC module/accel/dsa/accel_dsa.o 00:02:41.307 CC module/keyring/file/keyring.o 00:02:41.307 CC module/keyring/linux/keyring_rpc.o 00:02:41.307 CC module/keyring/file/keyring_rpc.o 00:02:41.307 CC module/accel/dsa/accel_dsa_rpc.o 00:02:41.307 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:41.307 CC module/sock/posix/posix.o 00:02:41.307 CC module/fsdev/aio/fsdev_aio.o 00:02:41.307 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:41.307 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:41.307 CC module/fsdev/aio/linux_aio_mgr.o 00:02:41.307 CC module/scheduler/gscheduler/gscheduler.o 00:02:41.307 CC module/accel/iaa/accel_iaa_rpc.o 00:02:41.307 CC module/accel/iaa/accel_iaa.o 00:02:41.307 LIB 
libspdk_env_dpdk_rpc.a 00:02:41.307 SO libspdk_env_dpdk_rpc.so.6.0 00:02:41.565 SYMLINK libspdk_env_dpdk_rpc.so 00:02:41.565 LIB libspdk_keyring_file.a 00:02:41.565 LIB libspdk_scheduler_gscheduler.a 00:02:41.565 LIB libspdk_scheduler_dpdk_governor.a 00:02:41.565 SO libspdk_keyring_file.so.2.0 00:02:41.565 SO libspdk_scheduler_gscheduler.so.4.0 00:02:41.565 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:41.565 LIB libspdk_accel_error.a 00:02:41.565 LIB libspdk_keyring_linux.a 00:02:41.565 SYMLINK libspdk_scheduler_gscheduler.so 00:02:41.565 SYMLINK libspdk_keyring_file.so 00:02:41.565 SO libspdk_accel_error.so.2.0 00:02:41.565 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:41.565 SO libspdk_keyring_linux.so.1.0 00:02:41.565 LIB libspdk_accel_ioat.a 00:02:41.565 LIB libspdk_blob_bdev.a 00:02:41.565 SYMLINK libspdk_accel_error.so 00:02:41.565 LIB libspdk_scheduler_dynamic.a 00:02:41.565 LIB libspdk_accel_iaa.a 00:02:41.565 SO libspdk_blob_bdev.so.12.0 00:02:41.565 LIB libspdk_accel_dsa.a 00:02:41.565 SO libspdk_accel_ioat.so.6.0 00:02:41.565 SYMLINK libspdk_keyring_linux.so 00:02:41.565 SO libspdk_scheduler_dynamic.so.4.0 00:02:41.824 SO libspdk_accel_dsa.so.5.0 00:02:41.824 SO libspdk_accel_iaa.so.3.0 00:02:41.824 SYMLINK libspdk_blob_bdev.so 00:02:41.824 SYMLINK libspdk_accel_ioat.so 00:02:41.824 SYMLINK libspdk_scheduler_dynamic.so 00:02:41.824 SYMLINK libspdk_accel_iaa.so 00:02:41.824 SYMLINK libspdk_accel_dsa.so 00:02:41.824 LIB libspdk_vfu_device.a 00:02:42.140 SO libspdk_vfu_device.so.3.0 00:02:42.140 CC module/bdev/gpt/gpt.o 00:02:42.140 CC module/bdev/gpt/vbdev_gpt.o 00:02:42.140 CC module/bdev/delay/vbdev_delay.o 00:02:42.140 CC module/blobfs/bdev/blobfs_bdev.o 00:02:42.140 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:42.140 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:42.140 CC module/bdev/malloc/bdev_malloc.o 00:02:42.140 CC module/bdev/nvme/bdev_nvme.o 00:02:42.140 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:42.140 CC 
module/bdev/passthru/vbdev_passthru.o 00:02:42.140 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:42.140 CC module/bdev/error/vbdev_error.o 00:02:42.140 CC module/bdev/error/vbdev_error_rpc.o 00:02:42.140 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:42.140 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:42.140 CC module/bdev/aio/bdev_aio.o 00:02:42.140 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:42.140 CC module/bdev/nvme/nvme_rpc.o 00:02:42.140 CC module/bdev/aio/bdev_aio_rpc.o 00:02:42.140 CC module/bdev/nvme/bdev_mdns_client.o 00:02:42.140 CC module/bdev/split/vbdev_split.o 00:02:42.140 CC module/bdev/lvol/vbdev_lvol.o 00:02:42.140 CC module/bdev/nvme/vbdev_opal.o 00:02:42.140 CC module/bdev/null/bdev_null.o 00:02:42.140 CC module/bdev/raid/bdev_raid.o 00:02:42.140 CC module/bdev/null/bdev_null_rpc.o 00:02:42.140 CC module/bdev/raid/bdev_raid_rpc.o 00:02:42.140 CC module/bdev/split/vbdev_split_rpc.o 00:02:42.140 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:42.140 CC module/bdev/raid/bdev_raid_sb.o 00:02:42.140 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:42.140 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:42.140 CC module/bdev/raid/raid0.o 00:02:42.140 CC module/bdev/iscsi/bdev_iscsi.o 00:02:42.140 CC module/bdev/raid/raid1.o 00:02:42.140 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:42.140 CC module/bdev/raid/concat.o 00:02:42.140 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:42.140 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:42.140 CC module/bdev/ftl/bdev_ftl.o 00:02:42.140 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:42.140 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:42.140 SYMLINK libspdk_vfu_device.so 00:02:42.140 LIB libspdk_fsdev_aio.a 00:02:42.140 SO libspdk_fsdev_aio.so.1.0 00:02:42.398 LIB libspdk_sock_posix.a 00:02:42.398 SYMLINK libspdk_fsdev_aio.so 00:02:42.398 SO libspdk_sock_posix.so.6.0 00:02:42.398 LIB libspdk_blobfs_bdev.a 00:02:42.398 SYMLINK libspdk_sock_posix.so 00:02:42.398 SO libspdk_blobfs_bdev.so.6.0 00:02:42.398 LIB 
libspdk_bdev_split.a 00:02:42.398 SO libspdk_bdev_split.so.6.0 00:02:42.398 LIB libspdk_bdev_iscsi.a 00:02:42.398 LIB libspdk_bdev_gpt.a 00:02:42.398 LIB libspdk_bdev_error.a 00:02:42.398 SYMLINK libspdk_blobfs_bdev.so 00:02:42.398 SO libspdk_bdev_gpt.so.6.0 00:02:42.398 LIB libspdk_bdev_ftl.a 00:02:42.398 SO libspdk_bdev_iscsi.so.6.0 00:02:42.398 SO libspdk_bdev_error.so.6.0 00:02:42.398 LIB libspdk_bdev_null.a 00:02:42.398 SYMLINK libspdk_bdev_split.so 00:02:42.398 LIB libspdk_bdev_passthru.a 00:02:42.665 SO libspdk_bdev_ftl.so.6.0 00:02:42.665 SO libspdk_bdev_null.so.6.0 00:02:42.665 SO libspdk_bdev_passthru.so.6.0 00:02:42.665 SYMLINK libspdk_bdev_gpt.so 00:02:42.665 SYMLINK libspdk_bdev_iscsi.so 00:02:42.665 SYMLINK libspdk_bdev_error.so 00:02:42.665 LIB libspdk_bdev_aio.a 00:02:42.665 LIB libspdk_bdev_zone_block.a 00:02:42.665 SYMLINK libspdk_bdev_ftl.so 00:02:42.665 LIB libspdk_bdev_malloc.a 00:02:42.665 SO libspdk_bdev_aio.so.6.0 00:02:42.665 SYMLINK libspdk_bdev_null.so 00:02:42.665 SYMLINK libspdk_bdev_passthru.so 00:02:42.665 SO libspdk_bdev_zone_block.so.6.0 00:02:42.665 SO libspdk_bdev_malloc.so.6.0 00:02:42.665 LIB libspdk_bdev_delay.a 00:02:42.665 SYMLINK libspdk_bdev_aio.so 00:02:42.665 SO libspdk_bdev_delay.so.6.0 00:02:42.665 SYMLINK libspdk_bdev_zone_block.so 00:02:42.665 SYMLINK libspdk_bdev_malloc.so 00:02:42.665 SYMLINK libspdk_bdev_delay.so 00:02:42.665 LIB libspdk_bdev_lvol.a 00:02:42.665 LIB libspdk_bdev_virtio.a 00:02:42.665 SO libspdk_bdev_lvol.so.6.0 00:02:42.665 SO libspdk_bdev_virtio.so.6.0 00:02:42.923 SYMLINK libspdk_bdev_lvol.so 00:02:42.923 SYMLINK libspdk_bdev_virtio.so 00:02:43.180 LIB libspdk_bdev_raid.a 00:02:43.180 SO libspdk_bdev_raid.so.6.0 00:02:43.437 SYMLINK libspdk_bdev_raid.so 00:02:44.815 LIB libspdk_bdev_nvme.a 00:02:44.815 SO libspdk_bdev_nvme.so.7.1 00:02:44.815 SYMLINK libspdk_bdev_nvme.so 00:02:45.383 CC module/event/subsystems/iobuf/iobuf.o 00:02:45.383 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:45.383 CC 
module/event/subsystems/vmd/vmd.o 00:02:45.383 CC module/event/subsystems/keyring/keyring.o 00:02:45.383 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:45.383 CC module/event/subsystems/fsdev/fsdev.o 00:02:45.383 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:45.383 CC module/event/subsystems/scheduler/scheduler.o 00:02:45.383 CC module/event/subsystems/sock/sock.o 00:02:45.383 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:45.383 LIB libspdk_event_keyring.a 00:02:45.383 LIB libspdk_event_vhost_blk.a 00:02:45.383 LIB libspdk_event_vmd.a 00:02:45.383 LIB libspdk_event_fsdev.a 00:02:45.383 LIB libspdk_event_vfu_tgt.a 00:02:45.383 LIB libspdk_event_scheduler.a 00:02:45.383 LIB libspdk_event_sock.a 00:02:45.383 SO libspdk_event_keyring.so.1.0 00:02:45.383 LIB libspdk_event_iobuf.a 00:02:45.383 SO libspdk_event_vhost_blk.so.3.0 00:02:45.383 SO libspdk_event_fsdev.so.1.0 00:02:45.383 SO libspdk_event_vfu_tgt.so.3.0 00:02:45.383 SO libspdk_event_vmd.so.6.0 00:02:45.383 SO libspdk_event_sock.so.5.0 00:02:45.383 SO libspdk_event_scheduler.so.4.0 00:02:45.383 SO libspdk_event_iobuf.so.3.0 00:02:45.383 SYMLINK libspdk_event_keyring.so 00:02:45.642 SYMLINK libspdk_event_vhost_blk.so 00:02:45.642 SYMLINK libspdk_event_fsdev.so 00:02:45.642 SYMLINK libspdk_event_vfu_tgt.so 00:02:45.642 SYMLINK libspdk_event_sock.so 00:02:45.642 SYMLINK libspdk_event_scheduler.so 00:02:45.642 SYMLINK libspdk_event_vmd.so 00:02:45.642 SYMLINK libspdk_event_iobuf.so 00:02:45.642 CC module/event/subsystems/accel/accel.o 00:02:45.901 LIB libspdk_event_accel.a 00:02:45.901 SO libspdk_event_accel.so.6.0 00:02:45.901 SYMLINK libspdk_event_accel.so 00:02:46.162 CC module/event/subsystems/bdev/bdev.o 00:02:46.162 LIB libspdk_event_bdev.a 00:02:46.421 SO libspdk_event_bdev.so.6.0 00:02:46.421 SYMLINK libspdk_event_bdev.so 00:02:46.421 CC module/event/subsystems/nbd/nbd.o 00:02:46.421 CC module/event/subsystems/scsi/scsi.o 00:02:46.421 CC module/event/subsystems/ublk/ublk.o 00:02:46.421 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:02:46.421 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:46.679 LIB libspdk_event_nbd.a 00:02:46.679 LIB libspdk_event_ublk.a 00:02:46.679 LIB libspdk_event_scsi.a 00:02:46.679 SO libspdk_event_nbd.so.6.0 00:02:46.679 SO libspdk_event_ublk.so.3.0 00:02:46.679 SO libspdk_event_scsi.so.6.0 00:02:46.679 SYMLINK libspdk_event_nbd.so 00:02:46.679 SYMLINK libspdk_event_ublk.so 00:02:46.679 SYMLINK libspdk_event_scsi.so 00:02:46.679 LIB libspdk_event_nvmf.a 00:02:46.679 SO libspdk_event_nvmf.so.6.0 00:02:46.937 SYMLINK libspdk_event_nvmf.so 00:02:46.937 CC module/event/subsystems/iscsi/iscsi.o 00:02:46.937 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:47.195 LIB libspdk_event_vhost_scsi.a 00:02:47.195 LIB libspdk_event_iscsi.a 00:02:47.195 SO libspdk_event_vhost_scsi.so.3.0 00:02:47.195 SO libspdk_event_iscsi.so.6.0 00:02:47.195 SYMLINK libspdk_event_vhost_scsi.so 00:02:47.195 SYMLINK libspdk_event_iscsi.so 00:02:47.195 SO libspdk.so.6.0 00:02:47.195 SYMLINK libspdk.so 00:02:47.458 CC app/trace_record/trace_record.o 00:02:47.458 CXX app/trace/trace.o 00:02:47.458 CC app/spdk_lspci/spdk_lspci.o 00:02:47.458 CC test/rpc_client/rpc_client_test.o 00:02:47.458 CC app/spdk_nvme_discover/discovery_aer.o 00:02:47.458 CC app/spdk_top/spdk_top.o 00:02:47.458 CC app/spdk_nvme_identify/identify.o 00:02:47.458 CC app/spdk_nvme_perf/perf.o 00:02:47.458 TEST_HEADER include/spdk/accel.h 00:02:47.458 TEST_HEADER include/spdk/accel_module.h 00:02:47.458 TEST_HEADER include/spdk/assert.h 00:02:47.458 TEST_HEADER include/spdk/barrier.h 00:02:47.458 TEST_HEADER include/spdk/base64.h 00:02:47.458 TEST_HEADER include/spdk/bdev.h 00:02:47.458 TEST_HEADER include/spdk/bdev_module.h 00:02:47.458 TEST_HEADER include/spdk/bdev_zone.h 00:02:47.458 TEST_HEADER include/spdk/bit_array.h 00:02:47.458 TEST_HEADER include/spdk/bit_pool.h 00:02:47.458 TEST_HEADER include/spdk/blob_bdev.h 00:02:47.458 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:47.458 
TEST_HEADER include/spdk/blobfs.h 00:02:47.459 TEST_HEADER include/spdk/blob.h 00:02:47.459 TEST_HEADER include/spdk/conf.h 00:02:47.459 TEST_HEADER include/spdk/cpuset.h 00:02:47.459 TEST_HEADER include/spdk/config.h 00:02:47.459 TEST_HEADER include/spdk/crc16.h 00:02:47.459 TEST_HEADER include/spdk/crc32.h 00:02:47.459 TEST_HEADER include/spdk/crc64.h 00:02:47.459 TEST_HEADER include/spdk/dif.h 00:02:47.459 TEST_HEADER include/spdk/dma.h 00:02:47.459 TEST_HEADER include/spdk/endian.h 00:02:47.459 TEST_HEADER include/spdk/env_dpdk.h 00:02:47.459 TEST_HEADER include/spdk/env.h 00:02:47.459 TEST_HEADER include/spdk/event.h 00:02:47.459 TEST_HEADER include/spdk/fd_group.h 00:02:47.459 TEST_HEADER include/spdk/fd.h 00:02:47.459 TEST_HEADER include/spdk/file.h 00:02:47.459 TEST_HEADER include/spdk/fsdev_module.h 00:02:47.459 TEST_HEADER include/spdk/fsdev.h 00:02:47.459 TEST_HEADER include/spdk/ftl.h 00:02:47.459 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:47.459 TEST_HEADER include/spdk/gpt_spec.h 00:02:47.459 TEST_HEADER include/spdk/hexlify.h 00:02:47.459 TEST_HEADER include/spdk/histogram_data.h 00:02:47.459 TEST_HEADER include/spdk/idxd.h 00:02:47.459 TEST_HEADER include/spdk/idxd_spec.h 00:02:47.459 TEST_HEADER include/spdk/init.h 00:02:47.459 TEST_HEADER include/spdk/ioat.h 00:02:47.459 TEST_HEADER include/spdk/iscsi_spec.h 00:02:47.459 TEST_HEADER include/spdk/ioat_spec.h 00:02:47.459 TEST_HEADER include/spdk/json.h 00:02:47.459 TEST_HEADER include/spdk/jsonrpc.h 00:02:47.459 TEST_HEADER include/spdk/keyring_module.h 00:02:47.459 TEST_HEADER include/spdk/keyring.h 00:02:47.459 TEST_HEADER include/spdk/likely.h 00:02:47.459 TEST_HEADER include/spdk/log.h 00:02:47.459 TEST_HEADER include/spdk/md5.h 00:02:47.459 TEST_HEADER include/spdk/lvol.h 00:02:47.459 TEST_HEADER include/spdk/memory.h 00:02:47.459 TEST_HEADER include/spdk/mmio.h 00:02:47.459 TEST_HEADER include/spdk/nbd.h 00:02:47.459 TEST_HEADER include/spdk/net.h 00:02:47.459 TEST_HEADER 
include/spdk/notify.h 00:02:47.459 TEST_HEADER include/spdk/nvme.h 00:02:47.459 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:47.459 TEST_HEADER include/spdk/nvme_intel.h 00:02:47.459 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:47.459 TEST_HEADER include/spdk/nvme_zns.h 00:02:47.459 TEST_HEADER include/spdk/nvme_spec.h 00:02:47.459 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:47.459 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:47.459 TEST_HEADER include/spdk/nvmf.h 00:02:47.459 TEST_HEADER include/spdk/nvmf_spec.h 00:02:47.459 TEST_HEADER include/spdk/nvmf_transport.h 00:02:47.459 TEST_HEADER include/spdk/opal.h 00:02:47.459 TEST_HEADER include/spdk/pci_ids.h 00:02:47.459 TEST_HEADER include/spdk/opal_spec.h 00:02:47.459 TEST_HEADER include/spdk/pipe.h 00:02:47.459 TEST_HEADER include/spdk/queue.h 00:02:47.459 TEST_HEADER include/spdk/reduce.h 00:02:47.459 TEST_HEADER include/spdk/scheduler.h 00:02:47.459 TEST_HEADER include/spdk/rpc.h 00:02:47.459 TEST_HEADER include/spdk/scsi.h 00:02:47.459 TEST_HEADER include/spdk/scsi_spec.h 00:02:47.459 TEST_HEADER include/spdk/sock.h 00:02:47.459 TEST_HEADER include/spdk/stdinc.h 00:02:47.459 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:47.459 TEST_HEADER include/spdk/string.h 00:02:47.459 TEST_HEADER include/spdk/thread.h 00:02:47.459 TEST_HEADER include/spdk/trace.h 00:02:47.459 TEST_HEADER include/spdk/trace_parser.h 00:02:47.459 TEST_HEADER include/spdk/ublk.h 00:02:47.459 TEST_HEADER include/spdk/tree.h 00:02:47.459 TEST_HEADER include/spdk/util.h 00:02:47.459 TEST_HEADER include/spdk/uuid.h 00:02:47.459 TEST_HEADER include/spdk/version.h 00:02:47.459 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:47.459 CC app/spdk_dd/spdk_dd.o 00:02:47.459 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:47.459 TEST_HEADER include/spdk/vhost.h 00:02:47.459 TEST_HEADER include/spdk/xor.h 00:02:47.459 TEST_HEADER include/spdk/vmd.h 00:02:47.459 TEST_HEADER include/spdk/zipf.h 00:02:47.459 CXX test/cpp_headers/accel.o 00:02:47.459 CXX 
test/cpp_headers/accel_module.o 00:02:47.459 CXX test/cpp_headers/assert.o 00:02:47.459 CXX test/cpp_headers/barrier.o 00:02:47.459 CXX test/cpp_headers/base64.o 00:02:47.459 CXX test/cpp_headers/bdev.o 00:02:47.459 CXX test/cpp_headers/bdev_module.o 00:02:47.459 CXX test/cpp_headers/bdev_zone.o 00:02:47.459 CXX test/cpp_headers/bit_array.o 00:02:47.459 CXX test/cpp_headers/bit_pool.o 00:02:47.459 CXX test/cpp_headers/blob_bdev.o 00:02:47.459 CXX test/cpp_headers/blobfs_bdev.o 00:02:47.459 CXX test/cpp_headers/blobfs.o 00:02:47.459 CXX test/cpp_headers/blob.o 00:02:47.459 CC app/iscsi_tgt/iscsi_tgt.o 00:02:47.459 CXX test/cpp_headers/conf.o 00:02:47.459 CXX test/cpp_headers/config.o 00:02:47.459 CXX test/cpp_headers/cpuset.o 00:02:47.459 CXX test/cpp_headers/crc16.o 00:02:47.459 CC app/nvmf_tgt/nvmf_main.o 00:02:47.726 CXX test/cpp_headers/crc32.o 00:02:47.726 CC examples/ioat/verify/verify.o 00:02:47.726 CC app/spdk_tgt/spdk_tgt.o 00:02:47.726 CC test/app/histogram_perf/histogram_perf.o 00:02:47.726 CC examples/util/zipf/zipf.o 00:02:47.726 CC examples/ioat/perf/perf.o 00:02:47.726 CC test/env/vtophys/vtophys.o 00:02:47.726 CC test/app/jsoncat/jsoncat.o 00:02:47.726 CC test/app/stub/stub.o 00:02:47.726 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:47.726 CC test/env/pci/pci_ut.o 00:02:47.726 CC test/env/memory/memory_ut.o 00:02:47.726 CC test/thread/poller_perf/poller_perf.o 00:02:47.726 CC app/fio/nvme/fio_plugin.o 00:02:47.726 CC test/dma/test_dma/test_dma.o 00:02:47.726 CC test/app/bdev_svc/bdev_svc.o 00:02:47.726 CC app/fio/bdev/fio_plugin.o 00:02:47.726 LINK spdk_lspci 00:02:47.726 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:47.987 CC test/env/mem_callbacks/mem_callbacks.o 00:02:47.987 LINK rpc_client_test 00:02:47.987 LINK spdk_nvme_discover 00:02:47.987 LINK jsoncat 00:02:47.987 LINK histogram_perf 00:02:47.987 LINK interrupt_tgt 00:02:47.987 LINK zipf 00:02:47.987 LINK poller_perf 00:02:47.987 LINK vtophys 00:02:47.987 CXX 
test/cpp_headers/crc64.o 00:02:47.987 CXX test/cpp_headers/dif.o 00:02:47.987 CXX test/cpp_headers/dma.o 00:02:47.987 CXX test/cpp_headers/endian.o 00:02:47.987 CXX test/cpp_headers/env_dpdk.o 00:02:47.987 CXX test/cpp_headers/env.o 00:02:47.987 LINK nvmf_tgt 00:02:47.987 CXX test/cpp_headers/event.o 00:02:47.987 CXX test/cpp_headers/fd_group.o 00:02:47.987 CXX test/cpp_headers/fd.o 00:02:47.987 CXX test/cpp_headers/file.o 00:02:47.987 LINK env_dpdk_post_init 00:02:47.987 CXX test/cpp_headers/fsdev.o 00:02:47.987 LINK iscsi_tgt 00:02:47.987 LINK spdk_trace_record 00:02:47.987 LINK stub 00:02:47.987 CXX test/cpp_headers/fsdev_module.o 00:02:47.987 CXX test/cpp_headers/ftl.o 00:02:47.987 CXX test/cpp_headers/fuse_dispatcher.o 00:02:47.987 LINK verify 00:02:47.987 CXX test/cpp_headers/gpt_spec.o 00:02:47.987 LINK ioat_perf 00:02:47.987 LINK bdev_svc 00:02:48.252 LINK spdk_tgt 00:02:48.252 CXX test/cpp_headers/hexlify.o 00:02:48.252 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:48.252 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:48.252 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:48.252 CXX test/cpp_headers/histogram_data.o 00:02:48.252 CXX test/cpp_headers/idxd.o 00:02:48.252 CXX test/cpp_headers/idxd_spec.o 00:02:48.252 CXX test/cpp_headers/init.o 00:02:48.252 LINK spdk_dd 00:02:48.252 CXX test/cpp_headers/ioat.o 00:02:48.252 CXX test/cpp_headers/ioat_spec.o 00:02:48.252 CXX test/cpp_headers/iscsi_spec.o 00:02:48.516 CXX test/cpp_headers/json.o 00:02:48.516 LINK spdk_trace 00:02:48.516 CXX test/cpp_headers/jsonrpc.o 00:02:48.516 CXX test/cpp_headers/keyring.o 00:02:48.516 CXX test/cpp_headers/keyring_module.o 00:02:48.516 CXX test/cpp_headers/likely.o 00:02:48.516 CXX test/cpp_headers/log.o 00:02:48.516 CXX test/cpp_headers/lvol.o 00:02:48.516 CXX test/cpp_headers/md5.o 00:02:48.516 CXX test/cpp_headers/memory.o 00:02:48.516 CXX test/cpp_headers/mmio.o 00:02:48.516 CXX test/cpp_headers/nbd.o 00:02:48.516 CXX test/cpp_headers/net.o 00:02:48.516 CXX 
test/cpp_headers/notify.o 00:02:48.516 CXX test/cpp_headers/nvme.o 00:02:48.516 CXX test/cpp_headers/nvme_intel.o 00:02:48.516 CXX test/cpp_headers/nvme_ocssd.o 00:02:48.516 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:48.516 LINK pci_ut 00:02:48.516 CXX test/cpp_headers/nvme_spec.o 00:02:48.516 CXX test/cpp_headers/nvme_zns.o 00:02:48.516 CXX test/cpp_headers/nvmf_cmd.o 00:02:48.779 CC test/event/event_perf/event_perf.o 00:02:48.779 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:48.779 CC test/event/reactor_perf/reactor_perf.o 00:02:48.779 LINK nvme_fuzz 00:02:48.779 CC test/event/reactor/reactor.o 00:02:48.779 CXX test/cpp_headers/nvmf.o 00:02:48.779 CXX test/cpp_headers/nvmf_spec.o 00:02:48.779 CXX test/cpp_headers/nvmf_transport.o 00:02:48.779 CXX test/cpp_headers/opal.o 00:02:48.779 CXX test/cpp_headers/opal_spec.o 00:02:48.779 LINK test_dma 00:02:48.779 CC examples/sock/hello_world/hello_sock.o 00:02:48.779 CC test/event/app_repeat/app_repeat.o 00:02:48.779 CC examples/idxd/perf/perf.o 00:02:48.779 CC examples/vmd/lsvmd/lsvmd.o 00:02:48.779 CC examples/thread/thread/thread_ex.o 00:02:48.779 CXX test/cpp_headers/pci_ids.o 00:02:48.779 CC test/event/scheduler/scheduler.o 00:02:48.779 CXX test/cpp_headers/pipe.o 00:02:48.779 CC examples/vmd/led/led.o 00:02:48.779 CXX test/cpp_headers/queue.o 00:02:48.779 CXX test/cpp_headers/reduce.o 00:02:48.779 CXX test/cpp_headers/rpc.o 00:02:48.779 CXX test/cpp_headers/scheduler.o 00:02:48.779 CXX test/cpp_headers/scsi.o 00:02:48.779 CXX test/cpp_headers/scsi_spec.o 00:02:48.779 CXX test/cpp_headers/sock.o 00:02:48.779 CXX test/cpp_headers/stdinc.o 00:02:48.779 CXX test/cpp_headers/string.o 00:02:48.779 CXX test/cpp_headers/thread.o 00:02:48.779 CXX test/cpp_headers/trace.o 00:02:49.040 LINK spdk_bdev 00:02:49.040 CXX test/cpp_headers/trace_parser.o 00:02:49.040 CXX test/cpp_headers/tree.o 00:02:49.040 CXX test/cpp_headers/ublk.o 00:02:49.040 CXX test/cpp_headers/util.o 00:02:49.040 CXX test/cpp_headers/uuid.o 00:02:49.040 LINK 
spdk_nvme 00:02:49.040 LINK reactor_perf 00:02:49.040 LINK reactor 00:02:49.040 CXX test/cpp_headers/version.o 00:02:49.040 LINK event_perf 00:02:49.040 LINK vhost_fuzz 00:02:49.040 CXX test/cpp_headers/vfio_user_pci.o 00:02:49.040 LINK lsvmd 00:02:49.040 CXX test/cpp_headers/vfio_user_spec.o 00:02:49.040 CXX test/cpp_headers/vhost.o 00:02:49.040 CC app/vhost/vhost.o 00:02:49.040 LINK spdk_nvme_perf 00:02:49.040 CXX test/cpp_headers/vmd.o 00:02:49.040 LINK app_repeat 00:02:49.040 CXX test/cpp_headers/xor.o 00:02:49.040 LINK mem_callbacks 00:02:49.040 CXX test/cpp_headers/zipf.o 00:02:49.040 LINK spdk_nvme_identify 00:02:49.040 LINK led 00:02:49.302 LINK hello_sock 00:02:49.302 LINK scheduler 00:02:49.302 LINK spdk_top 00:02:49.302 LINK thread 00:02:49.302 CC test/nvme/e2edp/nvme_dp.o 00:02:49.302 LINK idxd_perf 00:02:49.302 CC test/nvme/reset/reset.o 00:02:49.302 CC test/nvme/aer/aer.o 00:02:49.302 CC test/nvme/overhead/overhead.o 00:02:49.302 CC test/nvme/connect_stress/connect_stress.o 00:02:49.302 CC test/nvme/sgl/sgl.o 00:02:49.302 CC test/nvme/reserve/reserve.o 00:02:49.302 CC test/nvme/startup/startup.o 00:02:49.302 CC test/nvme/simple_copy/simple_copy.o 00:02:49.302 CC test/nvme/err_injection/err_injection.o 00:02:49.302 CC test/nvme/compliance/nvme_compliance.o 00:02:49.302 CC test/nvme/fused_ordering/fused_ordering.o 00:02:49.302 CC test/nvme/boot_partition/boot_partition.o 00:02:49.302 CC test/nvme/cuse/cuse.o 00:02:49.302 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:49.302 CC test/nvme/fdp/fdp.o 00:02:49.563 CC test/blobfs/mkfs/mkfs.o 00:02:49.563 LINK vhost 00:02:49.563 CC test/accel/dif/dif.o 00:02:49.563 CC test/lvol/esnap/esnap.o 00:02:49.563 LINK startup 00:02:49.563 CC examples/nvme/hotplug/hotplug.o 00:02:49.563 CC examples/nvme/hello_world/hello_world.o 00:02:49.563 LINK err_injection 00:02:49.563 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:49.563 CC examples/nvme/reconnect/reconnect.o 00:02:49.563 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:02:49.563 CC examples/nvme/abort/abort.o 00:02:49.563 LINK connect_stress 00:02:49.563 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:49.563 CC examples/nvme/arbitration/arbitration.o 00:02:49.563 LINK fused_ordering 00:02:49.563 LINK mkfs 00:02:49.824 LINK boot_partition 00:02:49.824 LINK simple_copy 00:02:49.824 LINK reserve 00:02:49.824 LINK reset 00:02:49.824 LINK aer 00:02:49.824 LINK doorbell_aers 00:02:49.824 LINK memory_ut 00:02:49.824 CC examples/accel/perf/accel_perf.o 00:02:49.824 LINK overhead 00:02:49.824 LINK fdp 00:02:49.824 LINK nvme_compliance 00:02:49.824 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:49.824 CC examples/blob/cli/blobcli.o 00:02:49.824 LINK sgl 00:02:49.824 LINK nvme_dp 00:02:49.824 CC examples/blob/hello_world/hello_blob.o 00:02:50.084 LINK hello_world 00:02:50.084 LINK hotplug 00:02:50.084 LINK pmr_persistence 00:02:50.084 LINK cmb_copy 00:02:50.084 LINK reconnect 00:02:50.084 LINK arbitration 00:02:50.084 LINK hello_blob 00:02:50.365 LINK abort 00:02:50.366 LINK dif 00:02:50.366 LINK hello_fsdev 00:02:50.366 LINK accel_perf 00:02:50.366 LINK nvme_manage 00:02:50.366 LINK blobcli 00:02:50.624 CC test/bdev/bdevio/bdevio.o 00:02:50.624 LINK iscsi_fuzz 00:02:50.624 CC examples/bdev/hello_world/hello_bdev.o 00:02:50.883 CC examples/bdev/bdevperf/bdevperf.o 00:02:50.883 LINK cuse 00:02:50.883 LINK hello_bdev 00:02:51.141 LINK bdevio 00:02:51.708 LINK bdevperf 00:02:51.967 CC examples/nvmf/nvmf/nvmf.o 00:02:52.225 LINK nvmf 00:02:54.804 LINK esnap 00:02:55.062 00:02:55.062 real 1m10.490s 00:02:55.062 user 11m50.646s 00:02:55.062 sys 2m39.594s 00:02:55.062 03:53:23 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.062 03:53:23 make -- common/autotest_common.sh@10 -- $ set +x 00:02:55.062 ************************************ 00:02:55.062 END TEST make 00:02:55.062 ************************************ 00:02:55.062 03:53:23 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:02:55.062 03:53:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:55.062 03:53:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:55.062 03:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.062 03:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:55.062 03:53:23 -- pm/common@44 -- $ pid=30132 00:02:55.062 03:53:23 -- pm/common@50 -- $ kill -TERM 30132 00:02:55.062 03:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.062 03:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:55.062 03:53:23 -- pm/common@44 -- $ pid=30134 00:02:55.062 03:53:23 -- pm/common@50 -- $ kill -TERM 30134 00:02:55.062 03:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.062 03:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:55.062 03:53:23 -- pm/common@44 -- $ pid=30135 00:02:55.062 03:53:23 -- pm/common@50 -- $ kill -TERM 30135 00:02:55.062 03:53:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.062 03:53:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:55.062 03:53:23 -- pm/common@44 -- $ pid=30166 00:02:55.062 03:53:23 -- pm/common@50 -- $ sudo -E kill -TERM 30166 00:02:55.062 03:53:23 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:55.062 03:53:23 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:55.321 03:53:23 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:02:55.321 03:53:23 -- common/autotest_common.sh@1711 -- # lcov --version 00:02:55.321 03:53:23 -- 
common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:02:55.321 03:53:23 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:02:55.321 03:53:23 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:55.321 03:53:23 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:55.321 03:53:23 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:55.321 03:53:23 -- scripts/common.sh@336 -- # IFS=.-: 00:02:55.321 03:53:23 -- scripts/common.sh@336 -- # read -ra ver1 00:02:55.321 03:53:23 -- scripts/common.sh@337 -- # IFS=.-: 00:02:55.321 03:53:23 -- scripts/common.sh@337 -- # read -ra ver2 00:02:55.321 03:53:23 -- scripts/common.sh@338 -- # local 'op=<' 00:02:55.321 03:53:23 -- scripts/common.sh@340 -- # ver1_l=2 00:02:55.321 03:53:23 -- scripts/common.sh@341 -- # ver2_l=1 00:02:55.321 03:53:23 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:55.321 03:53:23 -- scripts/common.sh@344 -- # case "$op" in 00:02:55.321 03:53:23 -- scripts/common.sh@345 -- # : 1 00:02:55.321 03:53:23 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:55.321 03:53:23 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:55.321 03:53:23 -- scripts/common.sh@365 -- # decimal 1 00:02:55.321 03:53:23 -- scripts/common.sh@353 -- # local d=1 00:02:55.321 03:53:23 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:55.321 03:53:23 -- scripts/common.sh@355 -- # echo 1 00:02:55.321 03:53:23 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:55.321 03:53:23 -- scripts/common.sh@366 -- # decimal 2 00:02:55.321 03:53:23 -- scripts/common.sh@353 -- # local d=2 00:02:55.321 03:53:23 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:55.321 03:53:23 -- scripts/common.sh@355 -- # echo 2 00:02:55.322 03:53:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:55.322 03:53:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:55.322 03:53:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:55.322 03:53:23 -- scripts/common.sh@368 -- # return 0 00:02:55.322 03:53:23 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:55.322 03:53:23 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:02:55.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.322 --rc genhtml_branch_coverage=1 00:02:55.322 --rc genhtml_function_coverage=1 00:02:55.322 --rc genhtml_legend=1 00:02:55.322 --rc geninfo_all_blocks=1 00:02:55.322 --rc geninfo_unexecuted_blocks=1 00:02:55.322 00:02:55.322 ' 00:02:55.322 03:53:23 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:02:55.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.322 --rc genhtml_branch_coverage=1 00:02:55.322 --rc genhtml_function_coverage=1 00:02:55.322 --rc genhtml_legend=1 00:02:55.322 --rc geninfo_all_blocks=1 00:02:55.322 --rc geninfo_unexecuted_blocks=1 00:02:55.322 00:02:55.322 ' 00:02:55.322 03:53:23 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:02:55.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.322 --rc genhtml_branch_coverage=1 00:02:55.322 --rc 
genhtml_function_coverage=1 00:02:55.322 --rc genhtml_legend=1 00:02:55.322 --rc geninfo_all_blocks=1 00:02:55.322 --rc geninfo_unexecuted_blocks=1 00:02:55.322 00:02:55.322 ' 00:02:55.322 03:53:23 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:02:55.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:55.322 --rc genhtml_branch_coverage=1 00:02:55.322 --rc genhtml_function_coverage=1 00:02:55.322 --rc genhtml_legend=1 00:02:55.322 --rc geninfo_all_blocks=1 00:02:55.322 --rc geninfo_unexecuted_blocks=1 00:02:55.322 00:02:55.322 ' 00:02:55.322 03:53:23 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:55.322 03:53:23 -- nvmf/common.sh@7 -- # uname -s 00:02:55.322 03:53:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:55.322 03:53:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:55.322 03:53:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:55.322 03:53:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:55.322 03:53:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:55.322 03:53:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:55.322 03:53:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:55.322 03:53:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:55.322 03:53:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:55.322 03:53:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:55.322 03:53:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:55.322 03:53:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:55.322 03:53:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:55.322 03:53:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:55.322 03:53:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:55.322 03:53:23 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:55.322 03:53:23 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:55.322 03:53:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:55.322 03:53:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:55.322 03:53:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:55.322 03:53:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:55.322 03:53:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.322 03:53:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.322 03:53:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.322 03:53:23 -- paths/export.sh@5 -- # export PATH 00:02:55.322 03:53:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:55.322 03:53:23 -- nvmf/common.sh@51 -- # : 0 00:02:55.322 03:53:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:55.322 03:53:23 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:02:55.322 03:53:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:55.322 03:53:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:55.322 03:53:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:55.322 03:53:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:55.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:55.322 03:53:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:55.322 03:53:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:55.322 03:53:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:55.322 03:53:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:55.322 03:53:23 -- spdk/autotest.sh@32 -- # uname -s 00:02:55.322 03:53:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:55.322 03:53:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:55.322 03:53:23 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.322 03:53:23 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:55.322 03:53:23 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:55.322 03:53:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:55.322 03:53:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:55.322 03:53:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:55.322 03:53:23 -- spdk/autotest.sh@48 -- # udevadm_pid=90236 00:02:55.322 03:53:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:55.322 03:53:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:55.322 03:53:23 -- pm/common@17 -- # local monitor 00:02:55.322 03:53:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.322 03:53:23 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:02:55.322 03:53:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.322 03:53:23 -- pm/common@21 -- # date +%s 00:02:55.322 03:53:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:55.322 03:53:23 -- pm/common@21 -- # date +%s 00:02:55.322 03:53:23 -- pm/common@25 -- # sleep 1 00:02:55.322 03:53:23 -- pm/common@21 -- # date +%s 00:02:55.322 03:53:23 -- pm/common@21 -- # date +%s 00:02:55.322 03:53:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803 00:02:55.322 03:53:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803 00:02:55.322 03:53:23 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803 00:02:55.322 03:53:23 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733712803 00:02:55.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-vmstat.pm.log 00:02:55.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-cpu-load.pm.log 00:02:55.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-cpu-temp.pm.log 00:02:55.583 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733712803_collect-bmc-pm.bmc.pm.log 00:02:56.524 
03:53:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:56.524 03:53:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:56.524 03:53:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:56.524 03:53:24 -- common/autotest_common.sh@10 -- # set +x 00:02:56.524 03:53:24 -- spdk/autotest.sh@59 -- # create_test_list 00:02:56.524 03:53:24 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:56.524 03:53:24 -- common/autotest_common.sh@10 -- # set +x 00:02:56.524 03:53:24 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:56.524 03:53:24 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.524 03:53:24 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.524 03:53:24 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:56.524 03:53:24 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:56.524 03:53:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:56.524 03:53:24 -- common/autotest_common.sh@1457 -- # uname 00:02:56.524 03:53:24 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:56.524 03:53:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:56.524 03:53:24 -- common/autotest_common.sh@1477 -- # uname 00:02:56.524 03:53:24 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:56.524 03:53:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:56.524 03:53:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:56.524 lcov: LCOV version 1.15 00:02:56.524 03:53:25 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:14.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:14.613 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:36.528 03:54:02 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:36.528 03:54:02 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:36.528 03:54:02 -- common/autotest_common.sh@10 -- # set +x 00:03:36.528 03:54:02 -- spdk/autotest.sh@78 -- # rm -f 00:03:36.528 03:54:02 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:36.528 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:03:36.528 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:03:36.528 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:03:36.528 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:03:36.528 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:03:36.528 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:03:36.528 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:03:36.528 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:03:36.528 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:03:36.528 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:03:36.528 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:03:36.528 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:03:36.528 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:03:36.528 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:03:36.528 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:03:36.528 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:03:36.528 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:03:36.528 03:54:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:36.528 03:54:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:36.528 03:54:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:36.528 03:54:03 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:36.528 03:54:03 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:36.528 03:54:03 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:36.528 03:54:03 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:36.528 03:54:03 -- common/autotest_common.sh@1669 -- # bdf=0000:88:00.0 00:03:36.528 03:54:03 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:36.528 03:54:03 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:36.528 03:54:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:36.528 03:54:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:36.528 03:54:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:36.528 03:54:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:36.528 03:54:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:36.528 03:54:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:36.528 03:54:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:36.528 03:54:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:36.528 03:54:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:36.528 No valid GPT data, bailing 00:03:36.528 03:54:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:36.528 03:54:03 -- scripts/common.sh@394 -- # pt= 00:03:36.528 03:54:03 -- scripts/common.sh@395 -- 
# return 1 00:03:36.528 03:54:03 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:36.528 1+0 records in 00:03:36.528 1+0 records out 00:03:36.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00207613 s, 505 MB/s 00:03:36.528 03:54:03 -- spdk/autotest.sh@105 -- # sync 00:03:36.528 03:54:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:36.528 03:54:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:36.528 03:54:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:37.463 03:54:05 -- spdk/autotest.sh@111 -- # uname -s 00:03:37.463 03:54:05 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:37.463 03:54:05 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:37.463 03:54:05 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:38.837 Hugepages 00:03:38.837 node hugesize free / total 00:03:38.837 node0 1048576kB 0 / 0 00:03:38.837 node0 2048kB 0 / 0 00:03:38.837 node1 1048576kB 0 / 0 00:03:38.837 node1 2048kB 0 / 0 00:03:38.837 00:03:38.837 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:38.837 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:38.837 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:38.837 I/OAT 0000:80:04.6 8086 0e26 1 
ioatdma - - 00:03:38.837 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:38.837 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:38.837 03:54:07 -- spdk/autotest.sh@117 -- # uname -s 00:03:38.837 03:54:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:38.837 03:54:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:38.837 03:54:07 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.216 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:40.216 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:40.216 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:41.155 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:41.415 03:54:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:42.355 03:54:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:42.355 03:54:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:42.355 03:54:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:42.355 03:54:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:42.355 03:54:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:42.355 03:54:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:42.355 03:54:10 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:42.355 03:54:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:42.355 03:54:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:42.355 03:54:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:42.355 03:54:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:42.355 03:54:10 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.734 Waiting for block devices as requested 00:03:43.734 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:43.734 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:43.734 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:43.992 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:43.992 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:43.992 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:43.992 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:44.251 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:44.251 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:44.251 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:44.251 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:44.508 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:44.509 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:44.509 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:44.771 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:44.771 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:44.771 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:44.771 03:54:13 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:44.771 03:54:13 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:45.029 03:54:13 -- 
common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:45.029 03:54:13 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:45.029 03:54:13 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:45.029 03:54:13 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:45.029 03:54:13 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:45.029 03:54:13 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:03:45.029 03:54:13 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:45.029 03:54:13 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:45.029 03:54:13 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:45.029 03:54:13 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:45.029 03:54:13 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:45.029 03:54:13 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:45.029 03:54:13 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:45.029 03:54:13 -- common/autotest_common.sh@1543 -- # continue 00:03:45.029 03:54:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:45.029 03:54:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:45.029 03:54:13 -- common/autotest_common.sh@10 -- # set +x 00:03:45.029 03:54:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:45.029 03:54:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:45.029 
03:54:13 -- common/autotest_common.sh@10 -- # set +x 00:03:45.029 03:54:13 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.419 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.419 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.419 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.353 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.353 03:54:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:47.353 03:54:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:47.353 03:54:15 -- common/autotest_common.sh@10 -- # set +x 00:03:47.353 03:54:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:47.353 03:54:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:47.353 03:54:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:47.353 03:54:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:47.353 03:54:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:47.353 03:54:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:47.353 03:54:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:47.353 03:54:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:03:47.353 03:54:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:47.353 03:54:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:47.353 03:54:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:47.353 03:54:15 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:47.353 03:54:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:47.353 03:54:15 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:47.353 03:54:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:47.353 03:54:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:47.353 03:54:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:47.353 03:54:15 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:47.353 03:54:15 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:47.353 03:54:15 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:47.353 03:54:15 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:47.353 03:54:15 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:03:47.353 03:54:15 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:03:47.353 03:54:15 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=101286 00:03:47.353 03:54:15 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:47.353 03:54:15 -- common/autotest_common.sh@1585 -- # waitforlisten 101286 00:03:47.353 03:54:15 -- common/autotest_common.sh@835 -- # '[' -z 101286 ']' 00:03:47.353 03:54:15 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.353 03:54:15 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:47.353 03:54:15 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.353 03:54:15 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:47.353 03:54:15 -- common/autotest_common.sh@10 -- # set +x 00:03:47.613 [2024-12-09 03:54:15.962795] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:03:47.613 [2024-12-09 03:54:15.962870] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101286 ] 00:03:47.613 [2024-12-09 03:54:16.029412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.613 [2024-12-09 03:54:16.088802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:47.871 03:54:16 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:47.871 03:54:16 -- common/autotest_common.sh@868 -- # return 0 00:03:47.871 03:54:16 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:47.871 03:54:16 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:47.871 03:54:16 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:51.160 nvme0n1 00:03:51.160 03:54:19 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:51.160 [2024-12-09 03:54:19.698908] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:51.160 [2024-12-09 03:54:19.698952] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:51.160 request: 00:03:51.161 { 00:03:51.161 "nvme_ctrlr_name": "nvme0", 00:03:51.161 "password": "test", 00:03:51.161 "method": 
"bdev_nvme_opal_revert", 00:03:51.161 "req_id": 1 00:03:51.161 } 00:03:51.161 Got JSON-RPC error response 00:03:51.161 response: 00:03:51.161 { 00:03:51.161 "code": -32603, 00:03:51.161 "message": "Internal error" 00:03:51.161 } 00:03:51.161 03:54:19 -- common/autotest_common.sh@1591 -- # true 00:03:51.161 03:54:19 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:51.161 03:54:19 -- common/autotest_common.sh@1595 -- # killprocess 101286 00:03:51.161 03:54:19 -- common/autotest_common.sh@954 -- # '[' -z 101286 ']' 00:03:51.161 03:54:19 -- common/autotest_common.sh@958 -- # kill -0 101286 00:03:51.161 03:54:19 -- common/autotest_common.sh@959 -- # uname 00:03:51.161 03:54:19 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.161 03:54:19 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101286 00:03:51.419 03:54:19 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.419 03:54:19 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.419 03:54:19 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101286' 00:03:51.419 killing process with pid 101286 00:03:51.419 03:54:19 -- common/autotest_common.sh@973 -- # kill 101286 00:03:51.419 03:54:19 -- common/autotest_common.sh@978 -- # wait 101286 00:03:53.323 03:54:21 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:53.323 03:54:21 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:53.323 03:54:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:53.323 03:54:21 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:53.323 03:54:21 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:53.323 03:54:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.323 03:54:21 -- common/autotest_common.sh@10 -- # set +x 00:03:53.323 03:54:21 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:53.323 03:54:21 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.323 03:54:21 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.323 03:54:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.323 03:54:21 -- common/autotest_common.sh@10 -- # set +x 00:03:53.323 ************************************ 00:03:53.323 START TEST env 00:03:53.323 ************************************ 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:53.323 * Looking for test storage... 00:03:53.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1711 -- # lcov --version 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:53.323 03:54:21 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:53.323 03:54:21 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:53.323 03:54:21 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:53.323 03:54:21 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:53.323 03:54:21 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:53.323 03:54:21 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:53.323 03:54:21 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:53.323 03:54:21 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:53.323 03:54:21 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:53.323 03:54:21 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:53.323 03:54:21 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:53.323 03:54:21 env -- scripts/common.sh@344 -- # case "$op" in 00:03:53.323 03:54:21 env -- scripts/common.sh@345 -- # : 1 00:03:53.323 03:54:21 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:53.323 03:54:21 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:53.323 03:54:21 env -- scripts/common.sh@365 -- # decimal 1 00:03:53.323 03:54:21 env -- scripts/common.sh@353 -- # local d=1 00:03:53.323 03:54:21 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:53.323 03:54:21 env -- scripts/common.sh@355 -- # echo 1 00:03:53.323 03:54:21 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:53.323 03:54:21 env -- scripts/common.sh@366 -- # decimal 2 00:03:53.323 03:54:21 env -- scripts/common.sh@353 -- # local d=2 00:03:53.323 03:54:21 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:53.323 03:54:21 env -- scripts/common.sh@355 -- # echo 2 00:03:53.323 03:54:21 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:53.323 03:54:21 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:53.323 03:54:21 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:53.323 03:54:21 env -- scripts/common.sh@368 -- # return 0 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.323 --rc genhtml_branch_coverage=1 00:03:53.323 --rc genhtml_function_coverage=1 00:03:53.323 --rc genhtml_legend=1 00:03:53.323 --rc geninfo_all_blocks=1 00:03:53.323 --rc geninfo_unexecuted_blocks=1 00:03:53.323 00:03:53.323 ' 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.323 --rc genhtml_branch_coverage=1 00:03:53.323 --rc genhtml_function_coverage=1 00:03:53.323 --rc genhtml_legend=1 00:03:53.323 --rc geninfo_all_blocks=1 00:03:53.323 --rc geninfo_unexecuted_blocks=1 00:03:53.323 00:03:53.323 ' 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:53.323 --rc genhtml_branch_coverage=1 00:03:53.323 --rc genhtml_function_coverage=1 00:03:53.323 --rc genhtml_legend=1 00:03:53.323 --rc geninfo_all_blocks=1 00:03:53.323 --rc geninfo_unexecuted_blocks=1 00:03:53.323 00:03:53.323 ' 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:53.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:53.323 --rc genhtml_branch_coverage=1 00:03:53.323 --rc genhtml_function_coverage=1 00:03:53.323 --rc genhtml_legend=1 00:03:53.323 --rc geninfo_all_blocks=1 00:03:53.323 --rc geninfo_unexecuted_blocks=1 00:03:53.323 00:03:53.323 ' 00:03:53.323 03:54:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:53.323 03:54:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.323 03:54:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.323 ************************************ 00:03:53.323 START TEST env_memory 00:03:53.323 ************************************ 00:03:53.323 03:54:21 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:53.323 00:03:53.323 00:03:53.323 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.323 http://cunit.sourceforge.net/ 00:03:53.323 00:03:53.323 00:03:53.323 Suite: memory 00:03:53.324 Test: alloc and free memory map ...[2024-12-09 03:54:21.771214] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:53.324 passed 00:03:53.324 Test: mem map translation ...[2024-12-09 03:54:21.791343] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:53.324 [2024-12-09 
03:54:21.791364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:53.324 [2024-12-09 03:54:21.791420] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:53.324 [2024-12-09 03:54:21.791432] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:53.324 passed 00:03:53.324 Test: mem map registration ...[2024-12-09 03:54:21.836531] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:53.324 [2024-12-09 03:54:21.836562] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:53.324 passed 00:03:53.324 Test: mem map adjacent registrations ...passed 00:03:53.324 00:03:53.324 Run Summary: Type Total Ran Passed Failed Inactive 00:03:53.324 suites 1 1 n/a 0 0 00:03:53.324 tests 4 4 4 0 0 00:03:53.324 asserts 152 152 152 0 n/a 00:03:53.324 00:03:53.324 Elapsed time = 0.146 seconds 00:03:53.324 00:03:53.324 real 0m0.154s 00:03:53.324 user 0m0.145s 00:03:53.324 sys 0m0.008s 00:03:53.324 03:54:21 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:53.324 03:54:21 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:53.324 ************************************ 00:03:53.324 END TEST env_memory 00:03:53.324 ************************************ 00:03:53.582 03:54:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.582 03:54:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:53.582 03:54:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:53.582 03:54:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:53.582 ************************************ 00:03:53.582 START TEST env_vtophys 00:03:53.582 ************************************ 00:03:53.582 03:54:21 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:53.582 EAL: lib.eal log level changed from notice to debug 00:03:53.582 EAL: Detected lcore 0 as core 0 on socket 0 00:03:53.582 EAL: Detected lcore 1 as core 1 on socket 0 00:03:53.582 EAL: Detected lcore 2 as core 2 on socket 0 00:03:53.583 EAL: Detected lcore 3 as core 3 on socket 0 00:03:53.583 EAL: Detected lcore 4 as core 4 on socket 0 00:03:53.583 EAL: Detected lcore 5 as core 5 on socket 0 00:03:53.583 EAL: Detected lcore 6 as core 8 on socket 0 00:03:53.583 EAL: Detected lcore 7 as core 9 on socket 0 00:03:53.583 EAL: Detected lcore 8 as core 10 on socket 0 00:03:53.583 EAL: Detected lcore 9 as core 11 on socket 0 00:03:53.583 EAL: Detected lcore 10 as core 12 on socket 0 00:03:53.583 EAL: Detected lcore 11 as core 13 on socket 0 00:03:53.583 EAL: Detected lcore 12 as core 0 on socket 1 00:03:53.583 EAL: Detected lcore 13 as core 1 on socket 1 00:03:53.583 EAL: Detected lcore 14 as core 2 on socket 1 00:03:53.583 EAL: Detected lcore 15 as core 3 on socket 1 00:03:53.583 EAL: Detected lcore 16 as core 4 on socket 1 00:03:53.583 EAL: Detected lcore 17 as core 5 on socket 1 00:03:53.583 EAL: Detected lcore 18 as core 8 on socket 1 00:03:53.583 EAL: Detected lcore 19 as core 9 on socket 1 00:03:53.583 EAL: Detected lcore 20 as core 10 on socket 1 00:03:53.583 EAL: Detected lcore 21 as core 11 on socket 1 00:03:53.583 EAL: Detected lcore 22 as core 12 on socket 1 00:03:53.583 EAL: Detected lcore 23 as core 13 on socket 1 00:03:53.583 EAL: Detected lcore 24 as core 0 on socket 0 00:03:53.583 EAL: Detected lcore 25 as core 
1 on socket 0 00:03:53.583 EAL: Detected lcore 26 as core 2 on socket 0 00:03:53.583 EAL: Detected lcore 27 as core 3 on socket 0 00:03:53.583 EAL: Detected lcore 28 as core 4 on socket 0 00:03:53.583 EAL: Detected lcore 29 as core 5 on socket 0 00:03:53.583 EAL: Detected lcore 30 as core 8 on socket 0 00:03:53.583 EAL: Detected lcore 31 as core 9 on socket 0 00:03:53.583 EAL: Detected lcore 32 as core 10 on socket 0 00:03:53.583 EAL: Detected lcore 33 as core 11 on socket 0 00:03:53.583 EAL: Detected lcore 34 as core 12 on socket 0 00:03:53.583 EAL: Detected lcore 35 as core 13 on socket 0 00:03:53.583 EAL: Detected lcore 36 as core 0 on socket 1 00:03:53.583 EAL: Detected lcore 37 as core 1 on socket 1 00:03:53.583 EAL: Detected lcore 38 as core 2 on socket 1 00:03:53.583 EAL: Detected lcore 39 as core 3 on socket 1 00:03:53.583 EAL: Detected lcore 40 as core 4 on socket 1 00:03:53.583 EAL: Detected lcore 41 as core 5 on socket 1 00:03:53.583 EAL: Detected lcore 42 as core 8 on socket 1 00:03:53.583 EAL: Detected lcore 43 as core 9 on socket 1 00:03:53.583 EAL: Detected lcore 44 as core 10 on socket 1 00:03:53.583 EAL: Detected lcore 45 as core 11 on socket 1 00:03:53.583 EAL: Detected lcore 46 as core 12 on socket 1 00:03:53.583 EAL: Detected lcore 47 as core 13 on socket 1 00:03:53.583 EAL: Maximum logical cores by configuration: 128 00:03:53.583 EAL: Detected CPU lcores: 48 00:03:53.583 EAL: Detected NUMA nodes: 2 00:03:53.583 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:53.583 EAL: Detected shared linkage of DPDK 00:03:53.583 EAL: No shared files mode enabled, IPC will be disabled 00:03:53.583 EAL: Bus pci wants IOVA as 'DC' 00:03:53.583 EAL: Buses did not request a specific IOVA mode. 00:03:53.583 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:53.583 EAL: Selected IOVA mode 'VA' 00:03:53.583 EAL: Probing VFIO support... 
00:03:53.583 EAL: IOMMU type 1 (Type 1) is supported 00:03:53.583 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:53.583 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:53.583 EAL: VFIO support initialized 00:03:53.583 EAL: Ask a virtual area of 0x2e000 bytes 00:03:53.583 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:53.583 EAL: Setting up physically contiguous memory... 00:03:53.583 EAL: Setting maximum number of open files to 524288 00:03:53.583 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:53.583 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:53.583 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:53.583 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:53.583 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:53.583 EAL: Ask a virtual area of 0x61000 bytes 00:03:53.583 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:53.583 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:53.583 EAL: Ask a virtual area of 0x400000000 bytes 00:03:53.583 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:03:53.583 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:53.583 EAL: Hugepages will be freed exactly as allocated. 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: TSC frequency is ~2700000 KHz 00:03:53.583 EAL: Main lcore 0 is ready (tid=7f1fac364a00;cpuset=[0]) 00:03:53.583 EAL: Trying to obtain current memory policy. 00:03:53.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.583 EAL: Restoring previous memory policy: 0 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was expanded by 2MB 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: No PCI address specified using 'addr=<id>' in: bus=pci 00:03:53.583 EAL: Mem event callback 'spdk:(nil)' registered 00:03:53.583 00:03:53.583 00:03:53.583 CUnit - A unit testing framework for C - Version 2.1-3 00:03:53.583 http://cunit.sourceforge.net/ 00:03:53.583 00:03:53.583 00:03:53.583 Suite: components_suite 00:03:53.583 Test: vtophys_malloc_test ...passed 00:03:53.583 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:53.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.583 EAL: Restoring previous memory policy: 4 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was expanded by 4MB 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was shrunk by 4MB 00:03:53.583 EAL: Trying to obtain current memory policy. 
00:03:53.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.583 EAL: Restoring previous memory policy: 4 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was expanded by 6MB 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was shrunk by 6MB 00:03:53.583 EAL: Trying to obtain current memory policy. 00:03:53.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.583 EAL: Restoring previous memory policy: 4 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was expanded by 10MB 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was shrunk by 10MB 00:03:53.583 EAL: Trying to obtain current memory policy. 00:03:53.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.583 EAL: Restoring previous memory policy: 4 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was expanded by 18MB 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.583 EAL: request: mp_malloc_sync 00:03:53.583 EAL: No shared files mode enabled, IPC is disabled 00:03:53.583 EAL: Heap on socket 0 was shrunk by 18MB 00:03:53.583 EAL: Trying to obtain current memory policy. 
00:03:53.583 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.583 EAL: Restoring previous memory policy: 4 00:03:53.583 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.584 EAL: request: mp_malloc_sync 00:03:53.584 EAL: No shared files mode enabled, IPC is disabled 00:03:53.584 EAL: Heap on socket 0 was expanded by 34MB 00:03:53.584 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.584 EAL: request: mp_malloc_sync 00:03:53.584 EAL: No shared files mode enabled, IPC is disabled 00:03:53.584 EAL: Heap on socket 0 was shrunk by 34MB 00:03:53.584 EAL: Trying to obtain current memory policy. 00:03:53.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.584 EAL: Restoring previous memory policy: 4 00:03:53.584 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.584 EAL: request: mp_malloc_sync 00:03:53.584 EAL: No shared files mode enabled, IPC is disabled 00:03:53.584 EAL: Heap on socket 0 was expanded by 66MB 00:03:53.584 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.584 EAL: request: mp_malloc_sync 00:03:53.584 EAL: No shared files mode enabled, IPC is disabled 00:03:53.584 EAL: Heap on socket 0 was shrunk by 66MB 00:03:53.584 EAL: Trying to obtain current memory policy. 00:03:53.584 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.584 EAL: Restoring previous memory policy: 4 00:03:53.584 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.584 EAL: request: mp_malloc_sync 00:03:53.584 EAL: No shared files mode enabled, IPC is disabled 00:03:53.584 EAL: Heap on socket 0 was expanded by 130MB 00:03:53.584 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.842 EAL: request: mp_malloc_sync 00:03:53.842 EAL: No shared files mode enabled, IPC is disabled 00:03:53.842 EAL: Heap on socket 0 was shrunk by 130MB 00:03:53.842 EAL: Trying to obtain current memory policy. 
00:03:53.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:53.842 EAL: Restoring previous memory policy: 4 00:03:53.842 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.842 EAL: request: mp_malloc_sync 00:03:53.842 EAL: No shared files mode enabled, IPC is disabled 00:03:53.842 EAL: Heap on socket 0 was expanded by 258MB 00:03:53.842 EAL: Calling mem event callback 'spdk:(nil)' 00:03:53.842 EAL: request: mp_malloc_sync 00:03:53.842 EAL: No shared files mode enabled, IPC is disabled 00:03:53.842 EAL: Heap on socket 0 was shrunk by 258MB 00:03:53.842 EAL: Trying to obtain current memory policy. 00:03:53.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.100 EAL: Restoring previous memory policy: 4 00:03:54.100 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.100 EAL: request: mp_malloc_sync 00:03:54.100 EAL: No shared files mode enabled, IPC is disabled 00:03:54.100 EAL: Heap on socket 0 was expanded by 514MB 00:03:54.100 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.359 EAL: request: mp_malloc_sync 00:03:54.359 EAL: No shared files mode enabled, IPC is disabled 00:03:54.359 EAL: Heap on socket 0 was shrunk by 514MB 00:03:54.359 EAL: Trying to obtain current memory policy. 
00:03:54.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:54.617 EAL: Restoring previous memory policy: 4 00:03:54.617 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.617 EAL: request: mp_malloc_sync 00:03:54.617 EAL: No shared files mode enabled, IPC is disabled 00:03:54.617 EAL: Heap on socket 0 was expanded by 1026MB 00:03:54.617 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.876 EAL: request: mp_malloc_sync 00:03:54.876 EAL: No shared files mode enabled, IPC is disabled 00:03:54.876 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:54.876 passed 00:03:54.876 00:03:54.876 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.876 suites 1 1 n/a 0 0 00:03:54.876 tests 2 2 2 0 0 00:03:54.876 asserts 497 497 497 0 n/a 00:03:54.876 00:03:54.876 Elapsed time = 1.339 seconds 00:03:54.876 EAL: Calling mem event callback 'spdk:(nil)' 00:03:54.876 EAL: request: mp_malloc_sync 00:03:54.876 EAL: No shared files mode enabled, IPC is disabled 00:03:54.876 EAL: Heap on socket 0 was shrunk by 2MB 00:03:54.876 EAL: No shared files mode enabled, IPC is disabled 00:03:54.876 EAL: No shared files mode enabled, IPC is disabled 00:03:54.876 EAL: No shared files mode enabled, IPC is disabled 00:03:54.876 00:03:54.876 real 0m1.460s 00:03:54.876 user 0m0.866s 00:03:54.876 sys 0m0.557s 00:03:54.876 03:54:23 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.876 03:54:23 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:54.876 ************************************ 00:03:54.876 END TEST env_vtophys 00:03:54.876 ************************************ 00:03:54.876 03:54:23 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:54.876 03:54:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.876 03:54:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.876 03:54:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:54.876 
************************************ 00:03:54.876 START TEST env_pci 00:03:54.876 ************************************ 00:03:54.876 03:54:23 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:55.136 00:03:55.136 00:03:55.136 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.136 http://cunit.sourceforge.net/ 00:03:55.136 00:03:55.136 00:03:55.136 Suite: pci 00:03:55.136 Test: pci_hook ...[2024-12-09 03:54:23.458121] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 102183 has claimed it 00:03:55.136 EAL: Cannot find device (10000:00:01.0) 00:03:55.136 EAL: Failed to attach device on primary process 00:03:55.136 passed 00:03:55.136 00:03:55.136 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.136 suites 1 1 n/a 0 0 00:03:55.136 tests 1 1 1 0 0 00:03:55.136 asserts 25 25 25 0 n/a 00:03:55.136 00:03:55.136 Elapsed time = 0.022 seconds 00:03:55.136 00:03:55.136 real 0m0.035s 00:03:55.136 user 0m0.014s 00:03:55.136 sys 0m0.021s 00:03:55.136 03:54:23 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.136 03:54:23 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:55.136 ************************************ 00:03:55.136 END TEST env_pci 00:03:55.136 ************************************ 00:03:55.136 03:54:23 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:55.136 03:54:23 env -- env/env.sh@15 -- # uname 00:03:55.136 03:54:23 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:55.136 03:54:23 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:55.136 03:54:23 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.136 03:54:23 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:03:55.136 03:54:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.136 03:54:23 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.136 ************************************ 00:03:55.136 START TEST env_dpdk_post_init 00:03:55.136 ************************************ 00:03:55.136 03:54:23 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:55.136 EAL: Detected CPU lcores: 48 00:03:55.136 EAL: Detected NUMA nodes: 2 00:03:55.136 EAL: Detected shared linkage of DPDK 00:03:55.136 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.136 EAL: Selected IOVA mode 'VA' 00:03:55.136 EAL: VFIO support initialized 00:03:55.136 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.136 EAL: Using IOMMU type 1 (Type 1) 00:03:55.136 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:55.136 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:55.136 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:55.136 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:55.136 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:55.397 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:55.397 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:56.336 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:59.613 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:59.613 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:59.613 Starting DPDK initialization... 00:03:59.613 Starting SPDK post initialization... 00:03:59.613 SPDK NVMe probe 00:03:59.613 Attaching to 0000:88:00.0 00:03:59.613 Attached to 0000:88:00.0 00:03:59.613 Cleaning up... 00:03:59.613 00:03:59.613 real 0m4.387s 00:03:59.613 user 0m2.990s 00:03:59.613 sys 0m0.456s 00:03:59.613 03:54:27 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.613 03:54:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:59.613 ************************************ 00:03:59.613 END TEST env_dpdk_post_init 00:03:59.613 ************************************ 00:03:59.613 03:54:27 env -- env/env.sh@26 -- # uname 00:03:59.613 03:54:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:59.613 03:54:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.613 03:54:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.613 03:54:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.613 03:54:27 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.613 ************************************ 00:03:59.613 START TEST env_mem_callbacks 00:03:59.613 ************************************ 00:03:59.613 03:54:27 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:59.613 EAL: Detected CPU lcores: 48 00:03:59.613 EAL: Detected NUMA nodes: 2 00:03:59.613 EAL: Detected shared linkage of DPDK 00:03:59.613 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:59.613 EAL: Selected IOVA mode 'VA' 00:03:59.613 EAL: VFIO support initialized 00:03:59.613 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:59.613 00:03:59.613 00:03:59.613 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.613 http://cunit.sourceforge.net/ 00:03:59.613 00:03:59.613 00:03:59.613 Suite: memory 00:03:59.613 Test: test ... 00:03:59.613 register 0x200000200000 2097152 00:03:59.613 malloc 3145728 00:03:59.613 register 0x200000400000 4194304 00:03:59.613 buf 0x200000500000 len 3145728 PASSED 00:03:59.613 malloc 64 00:03:59.613 buf 0x2000004fff40 len 64 PASSED 00:03:59.613 malloc 4194304 00:03:59.613 register 0x200000800000 6291456 00:03:59.613 buf 0x200000a00000 len 4194304 PASSED 00:03:59.613 free 0x200000500000 3145728 00:03:59.613 free 0x2000004fff40 64 00:03:59.613 unregister 0x200000400000 4194304 PASSED 00:03:59.613 free 0x200000a00000 4194304 00:03:59.613 unregister 0x200000800000 6291456 PASSED 00:03:59.613 malloc 8388608 00:03:59.613 register 0x200000400000 10485760 00:03:59.613 buf 0x200000600000 len 8388608 PASSED 00:03:59.613 free 0x200000600000 8388608 00:03:59.613 unregister 0x200000400000 10485760 PASSED 00:03:59.613 passed 00:03:59.613 00:03:59.613 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.613 suites 1 1 n/a 0 0 00:03:59.613 tests 1 1 1 0 0 00:03:59.613 asserts 15 15 15 0 n/a 00:03:59.613 00:03:59.613 Elapsed time = 0.005 seconds 00:03:59.613 00:03:59.613 real 0m0.050s 00:03:59.613 user 0m0.012s 00:03:59.613 sys 0m0.037s 00:03:59.613 03:54:28 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.613 03:54:28 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:59.613 ************************************ 00:03:59.613 END TEST env_mem_callbacks 00:03:59.613 ************************************ 00:03:59.613 00:03:59.613 real 0m6.479s 00:03:59.613 user 0m4.230s 00:03:59.613 sys 0m1.292s 00:03:59.613 03:54:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.613 03:54:28 env -- common/autotest_common.sh@10 -- # set +x 00:03:59.613 ************************************ 00:03:59.613 END TEST env 00:03:59.613 ************************************ 00:03:59.613 03:54:28 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.613 03:54:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:59.613 03:54:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.613 03:54:28 -- common/autotest_common.sh@10 -- # set +x 00:03:59.613 ************************************ 00:03:59.613 START TEST rpc 00:03:59.613 ************************************ 00:03:59.613 03:54:28 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:59.613 * Looking for test storage... 
00:03:59.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:59.613 03:54:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:59.613 03:54:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:03:59.613 03:54:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:59.872 03:54:28 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:59.872 03:54:28 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:59.872 03:54:28 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:59.872 03:54:28 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:59.872 03:54:28 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:59.872 03:54:28 rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:59.872 03:54:28 rpc -- scripts/common.sh@345 -- # : 1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:59.872 03:54:28 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:59.872 03:54:28 rpc -- scripts/common.sh@365 -- # decimal 1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@353 -- # local d=1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:59.872 03:54:28 rpc -- scripts/common.sh@355 -- # echo 1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:59.872 03:54:28 rpc -- scripts/common.sh@366 -- # decimal 2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@353 -- # local d=2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:59.872 03:54:28 rpc -- scripts/common.sh@355 -- # echo 2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:59.872 03:54:28 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:59.872 03:54:28 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:59.872 03:54:28 rpc -- scripts/common.sh@368 -- # return 0 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:59.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.872 --rc genhtml_branch_coverage=1 00:03:59.872 --rc genhtml_function_coverage=1 00:03:59.872 --rc genhtml_legend=1 00:03:59.872 --rc geninfo_all_blocks=1 00:03:59.872 --rc geninfo_unexecuted_blocks=1 00:03:59.872 00:03:59.872 ' 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:59.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.872 --rc genhtml_branch_coverage=1 00:03:59.872 --rc genhtml_function_coverage=1 00:03:59.872 --rc genhtml_legend=1 00:03:59.872 --rc geninfo_all_blocks=1 00:03:59.872 --rc geninfo_unexecuted_blocks=1 00:03:59.872 00:03:59.872 ' 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:59.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:59.872 --rc genhtml_branch_coverage=1 00:03:59.872 --rc genhtml_function_coverage=1 00:03:59.872 --rc genhtml_legend=1 00:03:59.872 --rc geninfo_all_blocks=1 00:03:59.872 --rc geninfo_unexecuted_blocks=1 00:03:59.872 00:03:59.872 ' 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:59.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:59.872 --rc genhtml_branch_coverage=1 00:03:59.872 --rc genhtml_function_coverage=1 00:03:59.872 --rc genhtml_legend=1 00:03:59.872 --rc geninfo_all_blocks=1 00:03:59.872 --rc geninfo_unexecuted_blocks=1 00:03:59.872 00:03:59.872 ' 00:03:59.872 03:54:28 rpc -- rpc/rpc.sh@65 -- # spdk_pid=102971 00:03:59.872 03:54:28 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:59.872 03:54:28 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:59.872 03:54:28 rpc -- rpc/rpc.sh@67 -- # waitforlisten 102971 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@835 -- # '[' -z 102971 ']' 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:59.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:59.872 03:54:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:59.872 [2024-12-09 03:54:28.288865] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:03:59.872 [2024-12-09 03:54:28.288936] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102971 ] 00:03:59.872 [2024-12-09 03:54:28.358079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.872 [2024-12-09 03:54:28.417881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:59.872 [2024-12-09 03:54:28.417952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 102971' to capture a snapshot of events at runtime. 00:03:59.872 [2024-12-09 03:54:28.417980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:59.872 [2024-12-09 03:54:28.417991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:59.873 [2024-12-09 03:54:28.418001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid102971 for offline analysis/debug. 
00:03:59.873 [2024-12-09 03:54:28.418637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.131 03:54:28 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:00.131 03:54:28 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:00.131 03:54:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.131 03:54:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:00.131 03:54:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:00.131 03:54:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:00.131 03:54:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.131 03:54:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.131 03:54:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.389 ************************************ 00:04:00.389 START TEST rpc_integrity 00:04:00.389 ************************************ 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.389 03:54:28 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.389 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.389 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.389 { 00:04:00.389 "name": "Malloc0", 00:04:00.389 "aliases": [ 00:04:00.389 "5df7be83-e014-4f84-988d-ec3b20c432a4" 00:04:00.389 ], 00:04:00.389 "product_name": "Malloc disk", 00:04:00.389 "block_size": 512, 00:04:00.389 "num_blocks": 16384, 00:04:00.389 "uuid": "5df7be83-e014-4f84-988d-ec3b20c432a4", 00:04:00.390 "assigned_rate_limits": { 00:04:00.390 "rw_ios_per_sec": 0, 00:04:00.390 "rw_mbytes_per_sec": 0, 00:04:00.390 "r_mbytes_per_sec": 0, 00:04:00.390 "w_mbytes_per_sec": 0 00:04:00.390 }, 00:04:00.390 "claimed": false, 00:04:00.390 "zoned": false, 00:04:00.390 "supported_io_types": { 00:04:00.390 "read": true, 00:04:00.390 "write": true, 00:04:00.390 "unmap": true, 00:04:00.390 "flush": true, 00:04:00.390 "reset": true, 00:04:00.390 "nvme_admin": false, 00:04:00.390 "nvme_io": false, 00:04:00.390 "nvme_io_md": false, 00:04:00.390 "write_zeroes": true, 00:04:00.390 "zcopy": true, 00:04:00.390 "get_zone_info": false, 00:04:00.390 
"zone_management": false, 00:04:00.390 "zone_append": false, 00:04:00.390 "compare": false, 00:04:00.390 "compare_and_write": false, 00:04:00.390 "abort": true, 00:04:00.390 "seek_hole": false, 00:04:00.390 "seek_data": false, 00:04:00.390 "copy": true, 00:04:00.390 "nvme_iov_md": false 00:04:00.390 }, 00:04:00.390 "memory_domains": [ 00:04:00.390 { 00:04:00.390 "dma_device_id": "system", 00:04:00.390 "dma_device_type": 1 00:04:00.390 }, 00:04:00.390 { 00:04:00.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.390 "dma_device_type": 2 00:04:00.390 } 00:04:00.390 ], 00:04:00.390 "driver_specific": {} 00:04:00.390 } 00:04:00.390 ]' 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.390 [2024-12-09 03:54:28.818704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:00.390 [2024-12-09 03:54:28.818760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.390 [2024-12-09 03:54:28.818784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc46020 00:04:00.390 [2024-12-09 03:54:28.818796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.390 [2024-12-09 03:54:28.820177] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.390 [2024-12-09 03:54:28.820199] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.390 Passthru0 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:00.390 { 00:04:00.390 "name": "Malloc0", 00:04:00.390 "aliases": [ 00:04:00.390 "5df7be83-e014-4f84-988d-ec3b20c432a4" 00:04:00.390 ], 00:04:00.390 "product_name": "Malloc disk", 00:04:00.390 "block_size": 512, 00:04:00.390 "num_blocks": 16384, 00:04:00.390 "uuid": "5df7be83-e014-4f84-988d-ec3b20c432a4", 00:04:00.390 "assigned_rate_limits": { 00:04:00.390 "rw_ios_per_sec": 0, 00:04:00.390 "rw_mbytes_per_sec": 0, 00:04:00.390 "r_mbytes_per_sec": 0, 00:04:00.390 "w_mbytes_per_sec": 0 00:04:00.390 }, 00:04:00.390 "claimed": true, 00:04:00.390 "claim_type": "exclusive_write", 00:04:00.390 "zoned": false, 00:04:00.390 "supported_io_types": { 00:04:00.390 "read": true, 00:04:00.390 "write": true, 00:04:00.390 "unmap": true, 00:04:00.390 "flush": true, 00:04:00.390 "reset": true, 00:04:00.390 "nvme_admin": false, 00:04:00.390 "nvme_io": false, 00:04:00.390 "nvme_io_md": false, 00:04:00.390 "write_zeroes": true, 00:04:00.390 "zcopy": true, 00:04:00.390 "get_zone_info": false, 00:04:00.390 "zone_management": false, 00:04:00.390 "zone_append": false, 00:04:00.390 "compare": false, 00:04:00.390 "compare_and_write": false, 00:04:00.390 "abort": true, 00:04:00.390 "seek_hole": false, 00:04:00.390 "seek_data": false, 00:04:00.390 "copy": true, 00:04:00.390 "nvme_iov_md": false 00:04:00.390 }, 00:04:00.390 "memory_domains": [ 00:04:00.390 { 00:04:00.390 "dma_device_id": "system", 00:04:00.390 "dma_device_type": 1 00:04:00.390 }, 00:04:00.390 { 00:04:00.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.390 "dma_device_type": 2 00:04:00.390 } 00:04:00.390 ], 00:04:00.390 "driver_specific": {} 00:04:00.390 }, 00:04:00.390 { 
00:04:00.390 "name": "Passthru0", 00:04:00.390 "aliases": [ 00:04:00.390 "0c0a20eb-0403-5ba1-a68a-c1983a390590" 00:04:00.390 ], 00:04:00.390 "product_name": "passthru", 00:04:00.390 "block_size": 512, 00:04:00.390 "num_blocks": 16384, 00:04:00.390 "uuid": "0c0a20eb-0403-5ba1-a68a-c1983a390590", 00:04:00.390 "assigned_rate_limits": { 00:04:00.390 "rw_ios_per_sec": 0, 00:04:00.390 "rw_mbytes_per_sec": 0, 00:04:00.390 "r_mbytes_per_sec": 0, 00:04:00.390 "w_mbytes_per_sec": 0 00:04:00.390 }, 00:04:00.390 "claimed": false, 00:04:00.390 "zoned": false, 00:04:00.390 "supported_io_types": { 00:04:00.390 "read": true, 00:04:00.390 "write": true, 00:04:00.390 "unmap": true, 00:04:00.390 "flush": true, 00:04:00.390 "reset": true, 00:04:00.390 "nvme_admin": false, 00:04:00.390 "nvme_io": false, 00:04:00.390 "nvme_io_md": false, 00:04:00.390 "write_zeroes": true, 00:04:00.390 "zcopy": true, 00:04:00.390 "get_zone_info": false, 00:04:00.390 "zone_management": false, 00:04:00.390 "zone_append": false, 00:04:00.390 "compare": false, 00:04:00.390 "compare_and_write": false, 00:04:00.390 "abort": true, 00:04:00.390 "seek_hole": false, 00:04:00.390 "seek_data": false, 00:04:00.390 "copy": true, 00:04:00.390 "nvme_iov_md": false 00:04:00.390 }, 00:04:00.390 "memory_domains": [ 00:04:00.390 { 00:04:00.390 "dma_device_id": "system", 00:04:00.390 "dma_device_type": 1 00:04:00.390 }, 00:04:00.390 { 00:04:00.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.390 "dma_device_type": 2 00:04:00.390 } 00:04:00.390 ], 00:04:00.390 "driver_specific": { 00:04:00.390 "passthru": { 00:04:00.390 "name": "Passthru0", 00:04:00.390 "base_bdev_name": "Malloc0" 00:04:00.390 } 00:04:00.390 } 00:04:00.390 } 00:04:00.390 ]' 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:00.390 03:54:28 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:00.390 03:54:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:00.390 00:04:00.390 real 0m0.222s 00:04:00.390 user 0m0.141s 00:04:00.390 sys 0m0.023s 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.390 03:54:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.390 ************************************ 00:04:00.390 END TEST rpc_integrity 00:04:00.390 ************************************ 00:04:00.390 03:54:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:00.390 03:54:28 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.390 03:54:28 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.390 03:54:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 ************************************ 00:04:00.648 START TEST rpc_plugins 
00:04:00.648 ************************************ 00:04:00.648 03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:00.648 03:54:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:00.648 03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.648 03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.648 03:54:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:00.648 03:54:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:00.648 03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.648 03:54:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:00.648 { 00:04:00.648 "name": "Malloc1", 00:04:00.648 "aliases": [ 00:04:00.648 "9f7c1785-8407-4a23-a1e5-7eef1b178446" 00:04:00.648 ], 00:04:00.648 "product_name": "Malloc disk", 00:04:00.648 "block_size": 4096, 00:04:00.648 "num_blocks": 256, 00:04:00.648 "uuid": "9f7c1785-8407-4a23-a1e5-7eef1b178446", 00:04:00.648 "assigned_rate_limits": { 00:04:00.648 "rw_ios_per_sec": 0, 00:04:00.648 "rw_mbytes_per_sec": 0, 00:04:00.648 "r_mbytes_per_sec": 0, 00:04:00.648 "w_mbytes_per_sec": 0 00:04:00.648 }, 00:04:00.648 "claimed": false, 00:04:00.648 "zoned": false, 00:04:00.648 "supported_io_types": { 00:04:00.648 "read": true, 00:04:00.648 "write": true, 00:04:00.648 "unmap": true, 00:04:00.648 "flush": true, 00:04:00.648 "reset": true, 00:04:00.648 "nvme_admin": false, 00:04:00.648 "nvme_io": false, 00:04:00.648 "nvme_io_md": false, 00:04:00.648 "write_zeroes": true, 00:04:00.648 "zcopy": true, 00:04:00.648 "get_zone_info": false, 00:04:00.648 "zone_management": false, 00:04:00.648 
"zone_append": false, 00:04:00.648 "compare": false, 00:04:00.648 "compare_and_write": false, 00:04:00.648 "abort": true, 00:04:00.648 "seek_hole": false, 00:04:00.648 "seek_data": false, 00:04:00.648 "copy": true, 00:04:00.648 "nvme_iov_md": false 00:04:00.648 }, 00:04:00.648 "memory_domains": [ 00:04:00.648 { 00:04:00.648 "dma_device_id": "system", 00:04:00.648 "dma_device_type": 1 00:04:00.648 }, 00:04:00.648 { 00:04:00.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.648 "dma_device_type": 2 00:04:00.648 } 00:04:00.648 ], 00:04:00.648 "driver_specific": {} 00:04:00.648 } 00:04:00.648 ]' 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:00.648 03:54:29 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:00.648 00:04:00.648 real 0m0.104s 00:04:00.648 user 0m0.068s 00:04:00.648 sys 0m0.009s 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.648 03:54:29 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 ************************************ 
00:04:00.648 END TEST rpc_plugins 00:04:00.648 ************************************ 00:04:00.648 03:54:29 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:00.648 03:54:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.648 03:54:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.648 03:54:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 ************************************ 00:04:00.648 START TEST rpc_trace_cmd_test 00:04:00.648 ************************************ 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.648 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:00.648 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid102971", 00:04:00.648 "tpoint_group_mask": "0x8", 00:04:00.648 "iscsi_conn": { 00:04:00.649 "mask": "0x2", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "scsi": { 00:04:00.649 "mask": "0x4", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "bdev": { 00:04:00.649 "mask": "0x8", 00:04:00.649 "tpoint_mask": "0xffffffffffffffff" 00:04:00.649 }, 00:04:00.649 "nvmf_rdma": { 00:04:00.649 "mask": "0x10", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "nvmf_tcp": { 00:04:00.649 "mask": "0x20", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "ftl": { 00:04:00.649 "mask": "0x40", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "blobfs": { 00:04:00.649 "mask": "0x80", 00:04:00.649 
"tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "dsa": { 00:04:00.649 "mask": "0x200", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "thread": { 00:04:00.649 "mask": "0x400", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "nvme_pcie": { 00:04:00.649 "mask": "0x800", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "iaa": { 00:04:00.649 "mask": "0x1000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "nvme_tcp": { 00:04:00.649 "mask": "0x2000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "bdev_nvme": { 00:04:00.649 "mask": "0x4000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "sock": { 00:04:00.649 "mask": "0x8000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "blob": { 00:04:00.649 "mask": "0x10000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "bdev_raid": { 00:04:00.649 "mask": "0x20000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 }, 00:04:00.649 "scheduler": { 00:04:00.649 "mask": "0x40000", 00:04:00.649 "tpoint_mask": "0x0" 00:04:00.649 } 00:04:00.649 }' 00:04:00.649 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:00.649 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:00.649 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:00.649 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:00.649 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:00.907 00:04:00.907 real 0m0.186s 00:04:00.907 user 0m0.162s 00:04:00.907 sys 0m0.015s 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.907 03:54:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:00.907 ************************************ 00:04:00.907 END TEST rpc_trace_cmd_test 00:04:00.907 ************************************ 00:04:00.907 03:54:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:00.907 03:54:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:00.907 03:54:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:00.907 03:54:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:00.907 03:54:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.907 03:54:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:00.907 ************************************ 00:04:00.907 START TEST rpc_daemon_integrity 00:04:00.907 ************************************ 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:00.907 { 00:04:00.907 "name": "Malloc2", 00:04:00.907 "aliases": [ 00:04:00.907 "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4" 00:04:00.907 ], 00:04:00.907 "product_name": "Malloc disk", 00:04:00.907 "block_size": 512, 00:04:00.907 "num_blocks": 16384, 00:04:00.907 "uuid": "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4", 00:04:00.907 "assigned_rate_limits": { 00:04:00.907 "rw_ios_per_sec": 0, 00:04:00.907 "rw_mbytes_per_sec": 0, 00:04:00.907 "r_mbytes_per_sec": 0, 00:04:00.907 "w_mbytes_per_sec": 0 00:04:00.907 }, 00:04:00.907 "claimed": false, 00:04:00.907 "zoned": false, 00:04:00.907 "supported_io_types": { 00:04:00.907 "read": true, 00:04:00.907 "write": true, 00:04:00.907 "unmap": true, 00:04:00.907 "flush": true, 00:04:00.907 "reset": true, 00:04:00.907 "nvme_admin": false, 00:04:00.907 "nvme_io": false, 00:04:00.907 "nvme_io_md": false, 00:04:00.907 "write_zeroes": true, 00:04:00.907 "zcopy": true, 00:04:00.907 "get_zone_info": false, 00:04:00.907 "zone_management": false, 00:04:00.907 "zone_append": false, 00:04:00.907 "compare": false, 00:04:00.907 "compare_and_write": false, 00:04:00.907 "abort": true, 00:04:00.907 "seek_hole": false, 00:04:00.907 "seek_data": false, 00:04:00.907 "copy": true, 00:04:00.907 "nvme_iov_md": false 00:04:00.907 }, 00:04:00.907 "memory_domains": [ 00:04:00.907 { 
00:04:00.907 "dma_device_id": "system", 00:04:00.907 "dma_device_type": 1 00:04:00.907 }, 00:04:00.907 { 00:04:00.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:00.907 "dma_device_type": 2 00:04:00.907 } 00:04:00.907 ], 00:04:00.907 "driver_specific": {} 00:04:00.907 } 00:04:00.907 ]' 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:00.907 [2024-12-09 03:54:29.473015] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:00.907 [2024-12-09 03:54:29.473071] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:00.907 [2024-12-09 03:54:29.473095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb95320 00:04:00.907 [2024-12-09 03:54:29.473114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:00.907 [2024-12-09 03:54:29.474364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:00.907 [2024-12-09 03:54:29.474390] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:00.907 Passthru0 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:00.907 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:01.165 { 00:04:01.165 "name": "Malloc2", 00:04:01.165 "aliases": [ 00:04:01.165 "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4" 00:04:01.165 ], 00:04:01.165 "product_name": "Malloc disk", 00:04:01.165 "block_size": 512, 00:04:01.165 "num_blocks": 16384, 00:04:01.165 "uuid": "8fe2c65e-1e4c-4110-96ef-89d9a1523ed4", 00:04:01.165 "assigned_rate_limits": { 00:04:01.165 "rw_ios_per_sec": 0, 00:04:01.165 "rw_mbytes_per_sec": 0, 00:04:01.165 "r_mbytes_per_sec": 0, 00:04:01.165 "w_mbytes_per_sec": 0 00:04:01.165 }, 00:04:01.165 "claimed": true, 00:04:01.165 "claim_type": "exclusive_write", 00:04:01.165 "zoned": false, 00:04:01.165 "supported_io_types": { 00:04:01.165 "read": true, 00:04:01.165 "write": true, 00:04:01.165 "unmap": true, 00:04:01.165 "flush": true, 00:04:01.165 "reset": true, 00:04:01.165 "nvme_admin": false, 00:04:01.165 "nvme_io": false, 00:04:01.165 "nvme_io_md": false, 00:04:01.165 "write_zeroes": true, 00:04:01.165 "zcopy": true, 00:04:01.165 "get_zone_info": false, 00:04:01.165 "zone_management": false, 00:04:01.165 "zone_append": false, 00:04:01.165 "compare": false, 00:04:01.165 "compare_and_write": false, 00:04:01.165 "abort": true, 00:04:01.165 "seek_hole": false, 00:04:01.165 "seek_data": false, 00:04:01.165 "copy": true, 00:04:01.165 "nvme_iov_md": false 00:04:01.165 }, 00:04:01.165 "memory_domains": [ 00:04:01.165 { 00:04:01.165 "dma_device_id": "system", 00:04:01.165 "dma_device_type": 1 00:04:01.165 }, 00:04:01.165 { 00:04:01.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.165 "dma_device_type": 2 00:04:01.165 } 00:04:01.165 ], 00:04:01.165 "driver_specific": {} 00:04:01.165 }, 00:04:01.165 { 00:04:01.165 "name": "Passthru0", 00:04:01.165 "aliases": [ 00:04:01.165 "07bb2221-df62-5861-aa18-eb173f687cd2" 00:04:01.165 ], 00:04:01.165 "product_name": "passthru", 00:04:01.165 "block_size": 512, 00:04:01.165 "num_blocks": 16384, 00:04:01.165 "uuid": 
"07bb2221-df62-5861-aa18-eb173f687cd2", 00:04:01.165 "assigned_rate_limits": { 00:04:01.165 "rw_ios_per_sec": 0, 00:04:01.165 "rw_mbytes_per_sec": 0, 00:04:01.165 "r_mbytes_per_sec": 0, 00:04:01.165 "w_mbytes_per_sec": 0 00:04:01.165 }, 00:04:01.165 "claimed": false, 00:04:01.165 "zoned": false, 00:04:01.165 "supported_io_types": { 00:04:01.165 "read": true, 00:04:01.165 "write": true, 00:04:01.165 "unmap": true, 00:04:01.165 "flush": true, 00:04:01.165 "reset": true, 00:04:01.165 "nvme_admin": false, 00:04:01.165 "nvme_io": false, 00:04:01.165 "nvme_io_md": false, 00:04:01.165 "write_zeroes": true, 00:04:01.165 "zcopy": true, 00:04:01.165 "get_zone_info": false, 00:04:01.165 "zone_management": false, 00:04:01.165 "zone_append": false, 00:04:01.165 "compare": false, 00:04:01.165 "compare_and_write": false, 00:04:01.165 "abort": true, 00:04:01.165 "seek_hole": false, 00:04:01.165 "seek_data": false, 00:04:01.165 "copy": true, 00:04:01.165 "nvme_iov_md": false 00:04:01.165 }, 00:04:01.165 "memory_domains": [ 00:04:01.165 { 00:04:01.165 "dma_device_id": "system", 00:04:01.165 "dma_device_type": 1 00:04:01.165 }, 00:04:01.165 { 00:04:01.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:01.165 "dma_device_type": 2 00:04:01.165 } 00:04:01.165 ], 00:04:01.165 "driver_specific": { 00:04:01.165 "passthru": { 00:04:01.165 "name": "Passthru0", 00:04:01.165 "base_bdev_name": "Malloc2" 00:04:01.165 } 00:04:01.165 } 00:04:01.165 } 00:04:01.165 ]' 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:01.165 00:04:01.165 real 0m0.213s 00:04:01.165 user 0m0.143s 00:04:01.165 sys 0m0.015s 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.165 03:54:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:01.165 ************************************ 00:04:01.165 END TEST rpc_daemon_integrity 00:04:01.165 ************************************ 00:04:01.165 03:54:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:01.165 03:54:29 rpc -- rpc/rpc.sh@84 -- # killprocess 102971 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@954 -- # '[' -z 102971 ']' 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@958 -- # kill -0 102971 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@959 -- # uname 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:01.165 03:54:29 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102971 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102971' 00:04:01.165 killing process with pid 102971 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@973 -- # kill 102971 00:04:01.165 03:54:29 rpc -- common/autotest_common.sh@978 -- # wait 102971 00:04:01.731 00:04:01.731 real 0m1.981s 00:04:01.731 user 0m2.467s 00:04:01.731 sys 0m0.587s 00:04:01.731 03:54:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.731 03:54:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.731 ************************************ 00:04:01.731 END TEST rpc 00:04:01.731 ************************************ 00:04:01.731 03:54:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.731 03:54:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.732 03:54:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.732 03:54:30 -- common/autotest_common.sh@10 -- # set +x 00:04:01.732 ************************************ 00:04:01.732 START TEST skip_rpc 00:04:01.732 ************************************ 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:01.732 * Looking for test storage... 
00:04:01.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.732 03:54:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.732 --rc genhtml_branch_coverage=1 00:04:01.732 --rc genhtml_function_coverage=1 00:04:01.732 --rc genhtml_legend=1 00:04:01.732 --rc geninfo_all_blocks=1 00:04:01.732 --rc geninfo_unexecuted_blocks=1 00:04:01.732 00:04:01.732 ' 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.732 --rc genhtml_branch_coverage=1 00:04:01.732 --rc genhtml_function_coverage=1 00:04:01.732 --rc genhtml_legend=1 00:04:01.732 --rc geninfo_all_blocks=1 00:04:01.732 --rc geninfo_unexecuted_blocks=1 00:04:01.732 00:04:01.732 ' 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.732 --rc genhtml_branch_coverage=1 00:04:01.732 --rc genhtml_function_coverage=1 00:04:01.732 --rc genhtml_legend=1 00:04:01.732 --rc geninfo_all_blocks=1 00:04:01.732 --rc geninfo_unexecuted_blocks=1 00:04:01.732 00:04:01.732 ' 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:01.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.732 --rc genhtml_branch_coverage=1 00:04:01.732 --rc genhtml_function_coverage=1 00:04:01.732 --rc genhtml_legend=1 00:04:01.732 --rc geninfo_all_blocks=1 00:04:01.732 --rc geninfo_unexecuted_blocks=1 00:04:01.732 00:04:01.732 ' 00:04:01.732 03:54:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:01.732 03:54:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:01.732 03:54:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.732 03:54:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.732 ************************************ 00:04:01.732 START TEST skip_rpc 00:04:01.732 ************************************ 00:04:01.732 03:54:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:01.732 03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=103309 00:04:01.732 03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:01.732 03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.732 03:54:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:01.990 [2024-12-09 03:54:30.360881] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:01.990 [2024-12-09 03:54:30.360973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103309 ] 00:04:01.990 [2024-12-09 03:54:30.430697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:01.990 [2024-12-09 03:54:30.489519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.245 03:54:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:07.246 03:54:35 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 103309 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 103309 ']' 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 103309 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103309 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103309' 00:04:07.246 killing process with pid 103309 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 103309 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 103309 00:04:07.246 00:04:07.246 real 0m5.467s 00:04:07.246 user 0m5.148s 00:04:07.246 sys 0m0.340s 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:07.246 03:54:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.246 ************************************ 00:04:07.246 END TEST skip_rpc 00:04:07.246 ************************************ 00:04:07.246 03:54:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:07.246 03:54:35 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:07.246 03:54:35 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:07.246 03:54:35 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:04:07.246 ************************************ 00:04:07.246 START TEST skip_rpc_with_json 00:04:07.246 ************************************ 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=103984 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 103984 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 103984 ']' 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:07.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:07.246 03:54:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.503 [2024-12-09 03:54:35.874370] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:04:07.503 [2024-12-09 03:54:35.874478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103984 ] 00:04:07.503 [2024-12-09 03:54:35.941980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:07.503 [2024-12-09 03:54:36.000481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.761 [2024-12-09 03:54:36.271509] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:07.761 request: 00:04:07.761 { 00:04:07.761 "trtype": "tcp", 00:04:07.761 "method": "nvmf_get_transports", 00:04:07.761 "req_id": 1 00:04:07.761 } 00:04:07.761 Got JSON-RPC error response 00:04:07.761 response: 00:04:07.761 { 00:04:07.761 "code": -19, 00:04:07.761 "message": "No such device" 00:04:07.761 } 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:07.761 [2024-12-09 03:54:36.279655] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:07.761 03:54:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:07.761 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:08.019 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:08.019 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.019 { 00:04:08.019 "subsystems": [ 00:04:08.019 { 00:04:08.019 "subsystem": "fsdev", 00:04:08.019 "config": [ 00:04:08.019 { 00:04:08.019 "method": "fsdev_set_opts", 00:04:08.019 "params": { 00:04:08.019 "fsdev_io_pool_size": 65535, 00:04:08.019 "fsdev_io_cache_size": 256 00:04:08.019 } 00:04:08.019 } 00:04:08.019 ] 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "vfio_user_target", 00:04:08.019 "config": null 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "keyring", 00:04:08.019 "config": [] 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "iobuf", 00:04:08.019 "config": [ 00:04:08.019 { 00:04:08.019 "method": "iobuf_set_options", 00:04:08.019 "params": { 00:04:08.019 "small_pool_count": 8192, 00:04:08.019 "large_pool_count": 1024, 00:04:08.019 "small_bufsize": 8192, 00:04:08.019 "large_bufsize": 135168, 00:04:08.019 "enable_numa": false 00:04:08.019 } 00:04:08.019 } 00:04:08.019 ] 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "sock", 00:04:08.019 "config": [ 00:04:08.019 { 00:04:08.019 "method": "sock_set_default_impl", 00:04:08.019 "params": { 00:04:08.019 "impl_name": "posix" 00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "sock_impl_set_options", 00:04:08.019 "params": { 00:04:08.019 "impl_name": "ssl", 00:04:08.019 "recv_buf_size": 4096, 00:04:08.019 "send_buf_size": 4096, 
00:04:08.019 "enable_recv_pipe": true, 00:04:08.019 "enable_quickack": false, 00:04:08.019 "enable_placement_id": 0, 00:04:08.019 "enable_zerocopy_send_server": true, 00:04:08.019 "enable_zerocopy_send_client": false, 00:04:08.019 "zerocopy_threshold": 0, 00:04:08.019 "tls_version": 0, 00:04:08.019 "enable_ktls": false 00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "sock_impl_set_options", 00:04:08.019 "params": { 00:04:08.019 "impl_name": "posix", 00:04:08.019 "recv_buf_size": 2097152, 00:04:08.019 "send_buf_size": 2097152, 00:04:08.019 "enable_recv_pipe": true, 00:04:08.019 "enable_quickack": false, 00:04:08.019 "enable_placement_id": 0, 00:04:08.019 "enable_zerocopy_send_server": true, 00:04:08.019 "enable_zerocopy_send_client": false, 00:04:08.019 "zerocopy_threshold": 0, 00:04:08.019 "tls_version": 0, 00:04:08.019 "enable_ktls": false 00:04:08.019 } 00:04:08.019 } 00:04:08.019 ] 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "vmd", 00:04:08.019 "config": [] 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "accel", 00:04:08.019 "config": [ 00:04:08.019 { 00:04:08.019 "method": "accel_set_options", 00:04:08.019 "params": { 00:04:08.019 "small_cache_size": 128, 00:04:08.019 "large_cache_size": 16, 00:04:08.019 "task_count": 2048, 00:04:08.019 "sequence_count": 2048, 00:04:08.019 "buf_count": 2048 00:04:08.019 } 00:04:08.019 } 00:04:08.019 ] 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "bdev", 00:04:08.019 "config": [ 00:04:08.019 { 00:04:08.019 "method": "bdev_set_options", 00:04:08.019 "params": { 00:04:08.019 "bdev_io_pool_size": 65535, 00:04:08.019 "bdev_io_cache_size": 256, 00:04:08.019 "bdev_auto_examine": true, 00:04:08.019 "iobuf_small_cache_size": 128, 00:04:08.019 "iobuf_large_cache_size": 16 00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "bdev_raid_set_options", 00:04:08.019 "params": { 00:04:08.019 "process_window_size_kb": 1024, 00:04:08.019 "process_max_bandwidth_mb_sec": 0 
00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "bdev_iscsi_set_options", 00:04:08.019 "params": { 00:04:08.019 "timeout_sec": 30 00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "bdev_nvme_set_options", 00:04:08.019 "params": { 00:04:08.019 "action_on_timeout": "none", 00:04:08.019 "timeout_us": 0, 00:04:08.019 "timeout_admin_us": 0, 00:04:08.019 "keep_alive_timeout_ms": 10000, 00:04:08.019 "arbitration_burst": 0, 00:04:08.019 "low_priority_weight": 0, 00:04:08.019 "medium_priority_weight": 0, 00:04:08.019 "high_priority_weight": 0, 00:04:08.019 "nvme_adminq_poll_period_us": 10000, 00:04:08.019 "nvme_ioq_poll_period_us": 0, 00:04:08.019 "io_queue_requests": 0, 00:04:08.019 "delay_cmd_submit": true, 00:04:08.019 "transport_retry_count": 4, 00:04:08.019 "bdev_retry_count": 3, 00:04:08.019 "transport_ack_timeout": 0, 00:04:08.019 "ctrlr_loss_timeout_sec": 0, 00:04:08.019 "reconnect_delay_sec": 0, 00:04:08.019 "fast_io_fail_timeout_sec": 0, 00:04:08.019 "disable_auto_failback": false, 00:04:08.019 "generate_uuids": false, 00:04:08.019 "transport_tos": 0, 00:04:08.019 "nvme_error_stat": false, 00:04:08.019 "rdma_srq_size": 0, 00:04:08.019 "io_path_stat": false, 00:04:08.019 "allow_accel_sequence": false, 00:04:08.019 "rdma_max_cq_size": 0, 00:04:08.019 "rdma_cm_event_timeout_ms": 0, 00:04:08.019 "dhchap_digests": [ 00:04:08.019 "sha256", 00:04:08.019 "sha384", 00:04:08.019 "sha512" 00:04:08.019 ], 00:04:08.019 "dhchap_dhgroups": [ 00:04:08.019 "null", 00:04:08.019 "ffdhe2048", 00:04:08.019 "ffdhe3072", 00:04:08.019 "ffdhe4096", 00:04:08.019 "ffdhe6144", 00:04:08.019 "ffdhe8192" 00:04:08.019 ] 00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "bdev_nvme_set_hotplug", 00:04:08.019 "params": { 00:04:08.019 "period_us": 100000, 00:04:08.019 "enable": false 00:04:08.019 } 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "method": "bdev_wait_for_examine" 00:04:08.019 } 00:04:08.019 ] 00:04:08.019 }, 00:04:08.019 { 
00:04:08.019 "subsystem": "scsi", 00:04:08.019 "config": null 00:04:08.019 }, 00:04:08.019 { 00:04:08.019 "subsystem": "scheduler", 00:04:08.019 "config": [ 00:04:08.019 { 00:04:08.019 "method": "framework_set_scheduler", 00:04:08.019 "params": { 00:04:08.019 "name": "static" 00:04:08.019 } 00:04:08.019 } 00:04:08.019 ] 00:04:08.019 }, 00:04:08.020 { 00:04:08.020 "subsystem": "vhost_scsi", 00:04:08.020 "config": [] 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "subsystem": "vhost_blk", 00:04:08.020 "config": [] 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "subsystem": "ublk", 00:04:08.020 "config": [] 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "subsystem": "nbd", 00:04:08.020 "config": [] 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "subsystem": "nvmf", 00:04:08.020 "config": [ 00:04:08.020 { 00:04:08.020 "method": "nvmf_set_config", 00:04:08.020 "params": { 00:04:08.020 "discovery_filter": "match_any", 00:04:08.020 "admin_cmd_passthru": { 00:04:08.020 "identify_ctrlr": false 00:04:08.020 }, 00:04:08.020 "dhchap_digests": [ 00:04:08.020 "sha256", 00:04:08.020 "sha384", 00:04:08.020 "sha512" 00:04:08.020 ], 00:04:08.020 "dhchap_dhgroups": [ 00:04:08.020 "null", 00:04:08.020 "ffdhe2048", 00:04:08.020 "ffdhe3072", 00:04:08.020 "ffdhe4096", 00:04:08.020 "ffdhe6144", 00:04:08.020 "ffdhe8192" 00:04:08.020 ] 00:04:08.020 } 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "method": "nvmf_set_max_subsystems", 00:04:08.020 "params": { 00:04:08.020 "max_subsystems": 1024 00:04:08.020 } 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "method": "nvmf_set_crdt", 00:04:08.020 "params": { 00:04:08.020 "crdt1": 0, 00:04:08.020 "crdt2": 0, 00:04:08.020 "crdt3": 0 00:04:08.020 } 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "method": "nvmf_create_transport", 00:04:08.020 "params": { 00:04:08.020 "trtype": "TCP", 00:04:08.020 "max_queue_depth": 128, 00:04:08.020 "max_io_qpairs_per_ctrlr": 127, 00:04:08.020 "in_capsule_data_size": 4096, 00:04:08.020 "max_io_size": 131072, 00:04:08.020 
"io_unit_size": 131072, 00:04:08.020 "max_aq_depth": 128, 00:04:08.020 "num_shared_buffers": 511, 00:04:08.020 "buf_cache_size": 4294967295, 00:04:08.020 "dif_insert_or_strip": false, 00:04:08.020 "zcopy": false, 00:04:08.020 "c2h_success": true, 00:04:08.020 "sock_priority": 0, 00:04:08.020 "abort_timeout_sec": 1, 00:04:08.020 "ack_timeout": 0, 00:04:08.020 "data_wr_pool_size": 0 00:04:08.020 } 00:04:08.020 } 00:04:08.020 ] 00:04:08.020 }, 00:04:08.020 { 00:04:08.020 "subsystem": "iscsi", 00:04:08.020 "config": [ 00:04:08.020 { 00:04:08.020 "method": "iscsi_set_options", 00:04:08.020 "params": { 00:04:08.020 "node_base": "iqn.2016-06.io.spdk", 00:04:08.020 "max_sessions": 128, 00:04:08.020 "max_connections_per_session": 2, 00:04:08.020 "max_queue_depth": 64, 00:04:08.020 "default_time2wait": 2, 00:04:08.020 "default_time2retain": 20, 00:04:08.020 "first_burst_length": 8192, 00:04:08.020 "immediate_data": true, 00:04:08.020 "allow_duplicated_isid": false, 00:04:08.020 "error_recovery_level": 0, 00:04:08.020 "nop_timeout": 60, 00:04:08.020 "nop_in_interval": 30, 00:04:08.020 "disable_chap": false, 00:04:08.020 "require_chap": false, 00:04:08.020 "mutual_chap": false, 00:04:08.020 "chap_group": 0, 00:04:08.020 "max_large_datain_per_connection": 64, 00:04:08.020 "max_r2t_per_connection": 4, 00:04:08.020 "pdu_pool_size": 36864, 00:04:08.020 "immediate_data_pool_size": 16384, 00:04:08.020 "data_out_pool_size": 2048 00:04:08.020 } 00:04:08.020 } 00:04:08.020 ] 00:04:08.020 } 00:04:08.020 ] 00:04:08.020 } 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 103984 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 103984 ']' 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 103984 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103984 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 103984' 00:04:08.020 killing process with pid 103984 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 103984 00:04:08.020 03:54:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 103984 00:04:08.587 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=104124 00:04:08.587 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.587 03:54:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 104124 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 104124 ']' 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 104124 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104124 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104124' 00:04:13.844 killing process with pid 104124 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 104124 00:04:13.844 03:54:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 104124 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:13.844 00:04:13.844 real 0m6.538s 00:04:13.844 user 0m6.187s 00:04:13.844 sys 0m0.670s 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:13.844 ************************************ 00:04:13.844 END TEST skip_rpc_with_json 00:04:13.844 ************************************ 00:04:13.844 03:54:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:13.844 03:54:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:13.844 03:54:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:13.844 03:54:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:13.844 ************************************ 00:04:13.844 START TEST skip_rpc_with_delay 00:04:13.844 ************************************ 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.844 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:13.845 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:14.102 [2024-12-09 03:54:42.460588] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:14.102 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:14.102 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.102 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:14.102 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.102 00:04:14.102 real 0m0.073s 00:04:14.102 user 0m0.049s 00:04:14.102 sys 0m0.023s 00:04:14.102 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.102 03:54:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:14.102 ************************************ 00:04:14.102 END TEST skip_rpc_with_delay 00:04:14.102 ************************************ 00:04:14.102 03:54:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:14.102 03:54:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:14.102 03:54:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:14.102 03:54:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.102 03:54:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.102 03:54:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.102 ************************************ 00:04:14.102 START TEST exit_on_failed_rpc_init 00:04:14.102 ************************************ 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=104842 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 104842 
00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 104842 ']' 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:14.102 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.102 [2024-12-09 03:54:42.585802] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:14.102 [2024-12-09 03:54:42.585883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104842 ] 00:04:14.102 [2024-12-09 03:54:42.650039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.360 [2024-12-09 03:54:42.709447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.617 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:14.617 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:14.617 03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.617 03:54:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.617 
03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:14.617 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.617 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:14.618 03:54:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:14.618 [2024-12-09 03:54:43.025442] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:04:14.618 [2024-12-09 03:54:43.025517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104967 ] 00:04:14.618 [2024-12-09 03:54:43.091308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.618 [2024-12-09 03:54:43.149877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:14.618 [2024-12-09 03:54:43.150013] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:14.618 [2024-12-09 03:54:43.150034] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:14.618 [2024-12-09 03:54:43.150045] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 104842 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 104842 ']' 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 104842 00:04:14.875 03:54:43 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104842 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104842' 00:04:14.875 killing process with pid 104842 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 104842 00:04:14.875 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 104842 00:04:15.135 00:04:15.135 real 0m1.152s 00:04:15.135 user 0m1.265s 00:04:15.135 sys 0m0.444s 00:04:15.135 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.135 03:54:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:15.135 ************************************ 00:04:15.135 END TEST exit_on_failed_rpc_init 00:04:15.135 ************************************ 00:04:15.135 03:54:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.135 00:04:15.135 real 0m13.585s 00:04:15.135 user 0m12.827s 00:04:15.135 sys 0m1.673s 00:04:15.135 03:54:43 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.135 03:54:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.135 ************************************ 00:04:15.135 END TEST skip_rpc 00:04:15.135 ************************************ 00:04:15.394 03:54:43 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.394 03:54:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.394 03:54:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.394 03:54:43 -- common/autotest_common.sh@10 -- # set +x 00:04:15.394 ************************************ 00:04:15.394 START TEST rpc_client 00:04:15.394 ************************************ 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:15.394 * Looking for test storage... 00:04:15.394 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.394 03:54:43 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.394 03:54:43 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.394 --rc genhtml_branch_coverage=1 00:04:15.395 --rc genhtml_function_coverage=1 00:04:15.395 --rc genhtml_legend=1 00:04:15.395 --rc geninfo_all_blocks=1 00:04:15.395 --rc geninfo_unexecuted_blocks=1 00:04:15.395 00:04:15.395 ' 00:04:15.395 03:54:43 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.395 --rc genhtml_branch_coverage=1 
00:04:15.395 --rc genhtml_function_coverage=1 00:04:15.395 --rc genhtml_legend=1 00:04:15.395 --rc geninfo_all_blocks=1 00:04:15.395 --rc geninfo_unexecuted_blocks=1 00:04:15.395 00:04:15.395 ' 00:04:15.395 03:54:43 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.395 --rc genhtml_branch_coverage=1 00:04:15.395 --rc genhtml_function_coverage=1 00:04:15.395 --rc genhtml_legend=1 00:04:15.395 --rc geninfo_all_blocks=1 00:04:15.395 --rc geninfo_unexecuted_blocks=1 00:04:15.395 00:04:15.395 ' 00:04:15.395 03:54:43 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.395 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.395 --rc genhtml_branch_coverage=1 00:04:15.395 --rc genhtml_function_coverage=1 00:04:15.395 --rc genhtml_legend=1 00:04:15.395 --rc geninfo_all_blocks=1 00:04:15.395 --rc geninfo_unexecuted_blocks=1 00:04:15.395 00:04:15.395 ' 00:04:15.395 03:54:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:15.395 OK 00:04:15.395 03:54:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:15.395 00:04:15.395 real 0m0.165s 00:04:15.395 user 0m0.104s 00:04:15.395 sys 0m0.069s 00:04:15.395 03:54:43 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.395 03:54:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:15.395 ************************************ 00:04:15.395 END TEST rpc_client 00:04:15.395 ************************************ 00:04:15.395 03:54:43 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:15.395 03:54:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.395 03:54:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.395 03:54:43 -- common/autotest_common.sh@10 
-- # set +x 00:04:15.395 ************************************ 00:04:15.395 START TEST json_config 00:04:15.395 ************************************ 00:04:15.395 03:54:43 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.654 03:54:44 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.654 03:54:44 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.654 03:54:44 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.654 03:54:44 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.654 03:54:44 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.654 03:54:44 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:15.654 03:54:44 json_config -- scripts/common.sh@345 -- # : 1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.654 03:54:44 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.654 03:54:44 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@353 -- # local d=1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.654 03:54:44 json_config -- scripts/common.sh@355 -- # echo 1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.654 03:54:44 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@353 -- # local d=2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.654 03:54:44 json_config -- scripts/common.sh@355 -- # echo 2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.654 03:54:44 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.654 03:54:44 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.654 03:54:44 json_config -- scripts/common.sh@368 -- # return 0 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.654 --rc genhtml_branch_coverage=1 00:04:15.654 --rc genhtml_function_coverage=1 00:04:15.654 --rc genhtml_legend=1 00:04:15.654 --rc geninfo_all_blocks=1 00:04:15.654 --rc geninfo_unexecuted_blocks=1 00:04:15.654 00:04:15.654 ' 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.654 --rc genhtml_branch_coverage=1 00:04:15.654 --rc genhtml_function_coverage=1 00:04:15.654 --rc genhtml_legend=1 00:04:15.654 --rc geninfo_all_blocks=1 00:04:15.654 --rc geninfo_unexecuted_blocks=1 00:04:15.654 00:04:15.654 ' 00:04:15.654 03:54:44 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.654 --rc genhtml_branch_coverage=1 00:04:15.654 --rc genhtml_function_coverage=1 00:04:15.654 --rc genhtml_legend=1 00:04:15.654 --rc geninfo_all_blocks=1 00:04:15.654 --rc geninfo_unexecuted_blocks=1 00:04:15.654 00:04:15.654 ' 00:04:15.654 03:54:44 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.654 --rc genhtml_branch_coverage=1 00:04:15.654 --rc genhtml_function_coverage=1 00:04:15.654 --rc genhtml_legend=1 00:04:15.654 --rc geninfo_all_blocks=1 00:04:15.654 --rc geninfo_unexecuted_blocks=1 00:04:15.654 00:04:15.654 ' 00:04:15.654 03:54:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:15.654 03:54:44 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:15.654 03:54:44 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:15.654 03:54:44 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:15.654 03:54:44 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:15.654 03:54:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.654 03:54:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.654 03:54:44 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.654 03:54:44 json_config -- paths/export.sh@5 -- # export PATH 00:04:15.654 03:54:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@51 -- # : 0 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:15.654 03:54:44 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:15.655 03:54:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:15.655 03:54:44 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:15.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:15.655 03:54:44 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:15.655 03:54:44 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:15.655 03:54:44 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:15.655 INFO: JSON configuration test init 00:04:15.655 03:54:44 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.655 03:54:44 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:15.655 03:54:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:15.655 03:54:44 json_config -- json_config/common.sh@10 -- # shift 00:04:15.655 03:54:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:15.655 03:54:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:15.655 03:54:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:15.655 03:54:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.655 03:54:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:15.655 03:54:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=105227 00:04:15.655 03:54:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:15.655 03:54:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:15.655 Waiting for target to run... 
00:04:15.655 03:54:44 json_config -- json_config/common.sh@25 -- # waitforlisten 105227 /var/tmp/spdk_tgt.sock 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@835 -- # '[' -z 105227 ']' 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:15.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.655 03:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:15.655 [2024-12-09 03:54:44.174166] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:15.655 [2024-12-09 03:54:44.174267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105227 ] 00:04:16.221 [2024-12-09 03:54:44.499828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.221 [2024-12-09 03:54:44.542630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.787 03:54:45 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.787 03:54:45 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:16.787 03:54:45 json_config -- json_config/common.sh@26 -- # echo '' 00:04:16.787 00:04:16.787 03:54:45 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:16.787 03:54:45 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:16.787 03:54:45 json_config -- common/autotest_common.sh@726 
-- # xtrace_disable 00:04:16.787 03:54:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.787 03:54:45 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:16.787 03:54:45 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:16.787 03:54:45 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:16.787 03:54:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:16.787 03:54:45 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:16.787 03:54:45 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:16.787 03:54:45 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:20.068 03:54:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.068 03:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:20.068 03:54:48 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@54 -- # sort 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:20.068 03:54:48 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:20.068 03:54:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:20.069 03:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:20.326 03:54:48 json_config -- 
json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:20.326 03:54:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:20.326 03:54:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:20.326 03:54:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:20.326 03:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:20.584 MallocForNvmf0 00:04:20.584 03:54:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.584 03:54:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:20.842 MallocForNvmf1 00:04:20.842 03:54:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:20.842 03:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:21.100 [2024-12-09 03:54:49.451077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:21.100 03:54:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:21.100 03:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:21.358 03:54:49 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:21.358 03:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:21.615 03:54:50 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:21.615 03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:21.874 03:54:50 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:21.874 03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:22.132 [2024-12-09 03:54:50.538710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:22.132 03:54:50 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:22.132 03:54:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.132 03:54:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.132 03:54:50 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:22.132 03:54:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.132 03:54:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.132 03:54:50 json_config -- json_config/json_config.sh@302 -- # 
[[ 0 -eq 1 ]] 00:04:22.132 03:54:50 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:22.132 03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:22.390 MallocBdevForConfigChangeCheck 00:04:22.390 03:54:50 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:22.390 03:54:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:22.390 03:54:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.390 03:54:50 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:22.390 03:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:22.955 03:54:51 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:22.955 INFO: shutting down applications... 
00:04:22.955 03:54:51 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:22.955 03:54:51 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:22.955 03:54:51 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:22.955 03:54:51 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:24.855 Calling clear_iscsi_subsystem 00:04:24.856 Calling clear_nvmf_subsystem 00:04:24.856 Calling clear_nbd_subsystem 00:04:24.856 Calling clear_ublk_subsystem 00:04:24.856 Calling clear_vhost_blk_subsystem 00:04:24.856 Calling clear_vhost_scsi_subsystem 00:04:24.856 Calling clear_bdev_subsystem 00:04:24.856 03:54:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:24.856 03:54:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:24.856 03:54:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:24.856 03:54:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:24.856 03:54:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:24.856 03:54:52 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:24.856 03:54:53 json_config -- json_config/json_config.sh@352 -- # break 00:04:24.856 03:54:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:24.856 03:54:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:24.856 03:54:53 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:24.856 03:54:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:24.856 03:54:53 json_config -- json_config/common.sh@35 -- # [[ -n 105227 ]] 00:04:24.856 03:54:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 105227 00:04:24.856 03:54:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:24.856 03:54:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:24.856 03:54:53 json_config -- json_config/common.sh@41 -- # kill -0 105227 00:04:24.856 03:54:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:25.423 03:54:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:25.423 03:54:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:25.423 03:54:53 json_config -- json_config/common.sh@41 -- # kill -0 105227 00:04:25.423 03:54:53 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:25.423 03:54:53 json_config -- json_config/common.sh@43 -- # break 00:04:25.423 03:54:53 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:25.423 03:54:53 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:25.423 SPDK target shutdown done 00:04:25.423 03:54:53 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:25.423 INFO: relaunching applications... 
00:04:25.423 03:54:53 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.423 03:54:53 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.423 03:54:53 json_config -- json_config/common.sh@10 -- # shift 00:04:25.423 03:54:53 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.423 03:54:53 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.423 03:54:53 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.423 03:54:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.423 03:54:53 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.423 03:54:53 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=106431 00:04:25.423 03:54:53 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:25.423 03:54:53 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.423 Waiting for target to run... 00:04:25.423 03:54:53 json_config -- json_config/common.sh@25 -- # waitforlisten 106431 /var/tmp/spdk_tgt.sock 00:04:25.423 03:54:53 json_config -- common/autotest_common.sh@835 -- # '[' -z 106431 ']' 00:04:25.423 03:54:53 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.423 03:54:53 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.423 03:54:53 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:25.423 03:54:53 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.423 03:54:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.423 [2024-12-09 03:54:53.932218] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:25.423 [2024-12-09 03:54:53.932317] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106431 ] 00:04:25.990 [2024-12-09 03:54:54.446917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.990 [2024-12-09 03:54:54.498660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.272 [2024-12-09 03:54:57.554292] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:29.272 [2024-12-09 03:54:57.586751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:29.272 03:54:57 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.272 03:54:57 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:29.272 03:54:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:29.272 00:04:29.272 03:54:57 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:29.272 03:54:57 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:29.272 INFO: Checking if target configuration is the same... 
00:04:29.272 03:54:57 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.272 03:54:57 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:29.272 03:54:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.272 + '[' 2 -ne 2 ']' 00:04:29.272 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:29.272 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:29.272 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:29.272 +++ basename /dev/fd/62 00:04:29.272 ++ mktemp /tmp/62.XXX 00:04:29.272 + tmp_file_1=/tmp/62.CXd 00:04:29.272 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.272 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:29.272 + tmp_file_2=/tmp/spdk_tgt_config.json.uBN 00:04:29.272 + ret=0 00:04:29.272 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.530 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:29.530 + diff -u /tmp/62.CXd /tmp/spdk_tgt_config.json.uBN 00:04:29.530 + echo 'INFO: JSON config files are the same' 00:04:29.530 INFO: JSON config files are the same 00:04:29.530 + rm /tmp/62.CXd /tmp/spdk_tgt_config.json.uBN 00:04:29.530 + exit 0 00:04:29.530 03:54:58 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:29.530 03:54:58 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:29.530 INFO: changing configuration and checking if this can be detected... 
00:04:29.530 03:54:58 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:29.530 03:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:29.788 03:54:58 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.788 03:54:58 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:29.788 03:54:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.788 + '[' 2 -ne 2 ']' 00:04:29.788 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:29.788 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:29.788 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:29.788 +++ basename /dev/fd/62 00:04:29.788 ++ mktemp /tmp/62.XXX 00:04:29.788 + tmp_file_1=/tmp/62.S42 00:04:30.046 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.046 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.046 + tmp_file_2=/tmp/spdk_tgt_config.json.udr 00:04:30.046 + ret=0 00:04:30.046 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.304 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.304 + diff -u /tmp/62.S42 /tmp/spdk_tgt_config.json.udr 00:04:30.304 + ret=1 00:04:30.304 + echo '=== Start of file: /tmp/62.S42 ===' 00:04:30.304 + cat /tmp/62.S42 00:04:30.304 + echo '=== End of file: /tmp/62.S42 ===' 00:04:30.304 + echo '' 00:04:30.304 + echo '=== Start of file: /tmp/spdk_tgt_config.json.udr ===' 00:04:30.304 + cat /tmp/spdk_tgt_config.json.udr 00:04:30.304 + echo '=== End of file: /tmp/spdk_tgt_config.json.udr ===' 00:04:30.304 + echo '' 00:04:30.304 + rm /tmp/62.S42 /tmp/spdk_tgt_config.json.udr 00:04:30.304 + exit 1 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:30.304 INFO: configuration change detected. 
00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@324 -- # [[ -n 106431 ]] 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.304 03:54:58 json_config -- json_config/json_config.sh@330 -- # killprocess 106431 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@954 -- # '[' -z 106431 ']' 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@958 -- # kill -0 106431 
00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@959 -- # uname 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.304 03:54:58 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106431 00:04:30.562 03:54:58 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.562 03:54:58 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.562 03:54:58 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106431' 00:04:30.562 killing process with pid 106431 00:04:30.562 03:54:58 json_config -- common/autotest_common.sh@973 -- # kill 106431 00:04:30.562 03:54:58 json_config -- common/autotest_common.sh@978 -- # wait 106431 00:04:31.939 03:55:00 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.939 03:55:00 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:31.939 03:55:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.939 03:55:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.199 03:55:00 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:32.199 03:55:00 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:32.199 INFO: Success 00:04:32.199 00:04:32.199 real 0m16.567s 00:04:32.199 user 0m18.219s 00:04:32.199 sys 0m2.606s 00:04:32.199 03:55:00 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.199 03:55:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.199 ************************************ 00:04:32.199 END TEST json_config 00:04:32.199 ************************************ 00:04:32.199 03:55:00 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.199 03:55:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.199 03:55:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.199 03:55:00 -- common/autotest_common.sh@10 -- # set +x 00:04:32.199 ************************************ 00:04:32.199 START TEST json_config_extra_key 00:04:32.199 ************************************ 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.199 --rc genhtml_branch_coverage=1 00:04:32.199 --rc genhtml_function_coverage=1 00:04:32.199 --rc genhtml_legend=1 00:04:32.199 --rc geninfo_all_blocks=1 
00:04:32.199 --rc geninfo_unexecuted_blocks=1 00:04:32.199 00:04:32.199 ' 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.199 --rc genhtml_branch_coverage=1 00:04:32.199 --rc genhtml_function_coverage=1 00:04:32.199 --rc genhtml_legend=1 00:04:32.199 --rc geninfo_all_blocks=1 00:04:32.199 --rc geninfo_unexecuted_blocks=1 00:04:32.199 00:04:32.199 ' 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.199 --rc genhtml_branch_coverage=1 00:04:32.199 --rc genhtml_function_coverage=1 00:04:32.199 --rc genhtml_legend=1 00:04:32.199 --rc geninfo_all_blocks=1 00:04:32.199 --rc geninfo_unexecuted_blocks=1 00:04:32.199 00:04:32.199 ' 00:04:32.199 03:55:00 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:32.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.199 --rc genhtml_branch_coverage=1 00:04:32.199 --rc genhtml_function_coverage=1 00:04:32.199 --rc genhtml_legend=1 00:04:32.199 --rc geninfo_all_blocks=1 00:04:32.199 --rc geninfo_unexecuted_blocks=1 00:04:32.199 00:04:32.199 ' 00:04:32.199 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.199 03:55:00 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.199 03:55:00 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.199 03:55:00 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.200 03:55:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.200 03:55:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.200 03:55:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.200 03:55:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:32.200 03:55:00 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.200 03:55:00 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.200 INFO: launching applications... 00:04:32.200 03:55:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=107360 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.200 Waiting for target to run... 
00:04:32.200 03:55:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 107360 /var/tmp/spdk_tgt.sock 00:04:32.200 03:55:00 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 107360 ']' 00:04:32.200 03:55:00 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.200 03:55:00 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.200 03:55:00 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.200 03:55:00 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.200 03:55:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.461 [2024-12-09 03:55:00.786875] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:32.461 [2024-12-09 03:55:00.786953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107360 ] 00:04:33.029 [2024-12-09 03:55:01.298922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.029 [2024-12-09 03:55:01.350091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.287 03:55:01 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.287 03:55:01 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.287 00:04:33.287 03:55:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:33.287 INFO: shutting down applications... 00:04:33.287 03:55:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 107360 ]] 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 107360 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 107360 00:04:33.287 03:55:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 107360 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.854 03:55:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.854 SPDK target shutdown done 00:04:33.854 03:55:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:33.854 Success 00:04:33.854 00:04:33.854 real 0m1.689s 00:04:33.854 user 0m1.521s 00:04:33.854 sys 0m0.637s 00:04:33.854 03:55:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.854 03:55:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:04:33.854 ************************************ 00:04:33.854 END TEST json_config_extra_key 00:04:33.854 ************************************ 00:04:33.854 03:55:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.854 03:55:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.854 03:55:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.854 03:55:02 -- common/autotest_common.sh@10 -- # set +x 00:04:33.854 ************************************ 00:04:33.854 START TEST alias_rpc 00:04:33.854 ************************************ 00:04:33.854 03:55:02 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.854 * Looking for test storage... 00:04:33.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:33.854 03:55:02 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.854 03:55:02 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.854 03:55:02 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:34.112 03:55:02 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:34.112 03:55:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:34.113 03:55:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:34.113 03:55:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.113 --rc genhtml_branch_coverage=1 00:04:34.113 --rc genhtml_function_coverage=1 00:04:34.113 --rc genhtml_legend=1 00:04:34.113 --rc geninfo_all_blocks=1 00:04:34.113 --rc geninfo_unexecuted_blocks=1 00:04:34.113 00:04:34.113 ' 
00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.113 --rc genhtml_branch_coverage=1 00:04:34.113 --rc genhtml_function_coverage=1 00:04:34.113 --rc genhtml_legend=1 00:04:34.113 --rc geninfo_all_blocks=1 00:04:34.113 --rc geninfo_unexecuted_blocks=1 00:04:34.113 00:04:34.113 ' 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.113 --rc genhtml_branch_coverage=1 00:04:34.113 --rc genhtml_function_coverage=1 00:04:34.113 --rc genhtml_legend=1 00:04:34.113 --rc geninfo_all_blocks=1 00:04:34.113 --rc geninfo_unexecuted_blocks=1 00:04:34.113 00:04:34.113 ' 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:34.113 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:34.113 --rc genhtml_branch_coverage=1 00:04:34.113 --rc genhtml_function_coverage=1 00:04:34.113 --rc genhtml_legend=1 00:04:34.113 --rc geninfo_all_blocks=1 00:04:34.113 --rc geninfo_unexecuted_blocks=1 00:04:34.113 00:04:34.113 ' 00:04:34.113 03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:34.113 03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=107675 00:04:34.113 03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.113 03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 107675 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 107675 ']' 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.113 03:55:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.113 [2024-12-09 03:55:02.521865] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:34.113 [2024-12-09 03:55:02.521964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107675 ] 00:04:34.113 [2024-12-09 03:55:02.587094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.113 [2024-12-09 03:55:02.643527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.371 03:55:02 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.371 03:55:02 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.371 03:55:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:34.629 03:55:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 107675 00:04:34.629 03:55:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 107675 ']' 00:04:34.629 03:55:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 107675 00:04:34.629 03:55:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:34.629 03:55:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.629 03:55:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107675 00:04:34.886 03:55:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.886 03:55:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.886 03:55:03 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 107675' 00:04:34.886 killing process with pid 107675 00:04:34.886 03:55:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 107675 00:04:34.886 03:55:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 107675 00:04:35.145 00:04:35.145 real 0m1.321s 00:04:35.145 user 0m1.427s 00:04:35.145 sys 0m0.442s 00:04:35.145 03:55:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.145 03:55:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.145 ************************************ 00:04:35.146 END TEST alias_rpc 00:04:35.146 ************************************ 00:04:35.146 03:55:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:35.146 03:55:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.146 03:55:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.146 03:55:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.146 03:55:03 -- common/autotest_common.sh@10 -- # set +x 00:04:35.146 ************************************ 00:04:35.146 START TEST spdkcli_tcp 00:04:35.146 ************************************ 00:04:35.146 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.405 * Looking for test storage... 
00:04:35.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:35.405 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.405 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.405 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:35.405 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.405 03:55:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:35.405 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.405 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:35.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.406 --rc genhtml_branch_coverage=1 00:04:35.406 --rc genhtml_function_coverage=1 00:04:35.406 --rc genhtml_legend=1 00:04:35.406 --rc geninfo_all_blocks=1 00:04:35.406 --rc geninfo_unexecuted_blocks=1 00:04:35.406 00:04:35.406 ' 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:35.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.406 --rc genhtml_branch_coverage=1 00:04:35.406 --rc genhtml_function_coverage=1 00:04:35.406 --rc genhtml_legend=1 00:04:35.406 --rc geninfo_all_blocks=1 00:04:35.406 --rc geninfo_unexecuted_blocks=1 00:04:35.406 00:04:35.406 ' 00:04:35.406 03:55:03 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:35.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.406 --rc genhtml_branch_coverage=1 00:04:35.406 --rc genhtml_function_coverage=1 00:04:35.406 --rc genhtml_legend=1 00:04:35.406 --rc geninfo_all_blocks=1 00:04:35.406 --rc geninfo_unexecuted_blocks=1 00:04:35.406 00:04:35.406 ' 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:35.406 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.406 --rc genhtml_branch_coverage=1 00:04:35.406 --rc genhtml_function_coverage=1 00:04:35.406 --rc genhtml_legend=1 00:04:35.406 --rc geninfo_all_blocks=1 00:04:35.406 --rc geninfo_unexecuted_blocks=1 00:04:35.406 00:04:35.406 ' 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=107868 00:04:35.406 03:55:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.406 03:55:03 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 107868 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 107868 ']' 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.406 03:55:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.406 [2024-12-09 03:55:03.909110] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:35.406 [2024-12-09 03:55:03.909198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107868 ] 00:04:35.406 [2024-12-09 03:55:03.975642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.664 [2024-12-09 03:55:04.036613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.664 [2024-12-09 03:55:04.036618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.921 03:55:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.922 03:55:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:35.922 03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=107994 00:04:35.922 03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:35.922 03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 
UNIX-CONNECT:/var/tmp/spdk.sock 00:04:36.180 [ 00:04:36.180 "bdev_malloc_delete", 00:04:36.180 "bdev_malloc_create", 00:04:36.180 "bdev_null_resize", 00:04:36.180 "bdev_null_delete", 00:04:36.180 "bdev_null_create", 00:04:36.180 "bdev_nvme_cuse_unregister", 00:04:36.180 "bdev_nvme_cuse_register", 00:04:36.180 "bdev_opal_new_user", 00:04:36.180 "bdev_opal_set_lock_state", 00:04:36.180 "bdev_opal_delete", 00:04:36.180 "bdev_opal_get_info", 00:04:36.180 "bdev_opal_create", 00:04:36.180 "bdev_nvme_opal_revert", 00:04:36.180 "bdev_nvme_opal_init", 00:04:36.180 "bdev_nvme_send_cmd", 00:04:36.180 "bdev_nvme_set_keys", 00:04:36.180 "bdev_nvme_get_path_iostat", 00:04:36.180 "bdev_nvme_get_mdns_discovery_info", 00:04:36.180 "bdev_nvme_stop_mdns_discovery", 00:04:36.180 "bdev_nvme_start_mdns_discovery", 00:04:36.180 "bdev_nvme_set_multipath_policy", 00:04:36.180 "bdev_nvme_set_preferred_path", 00:04:36.180 "bdev_nvme_get_io_paths", 00:04:36.180 "bdev_nvme_remove_error_injection", 00:04:36.180 "bdev_nvme_add_error_injection", 00:04:36.180 "bdev_nvme_get_discovery_info", 00:04:36.180 "bdev_nvme_stop_discovery", 00:04:36.180 "bdev_nvme_start_discovery", 00:04:36.180 "bdev_nvme_get_controller_health_info", 00:04:36.180 "bdev_nvme_disable_controller", 00:04:36.180 "bdev_nvme_enable_controller", 00:04:36.180 "bdev_nvme_reset_controller", 00:04:36.180 "bdev_nvme_get_transport_statistics", 00:04:36.180 "bdev_nvme_apply_firmware", 00:04:36.180 "bdev_nvme_detach_controller", 00:04:36.180 "bdev_nvme_get_controllers", 00:04:36.180 "bdev_nvme_attach_controller", 00:04:36.180 "bdev_nvme_set_hotplug", 00:04:36.180 "bdev_nvme_set_options", 00:04:36.180 "bdev_passthru_delete", 00:04:36.180 "bdev_passthru_create", 00:04:36.180 "bdev_lvol_set_parent_bdev", 00:04:36.180 "bdev_lvol_set_parent", 00:04:36.180 "bdev_lvol_check_shallow_copy", 00:04:36.180 "bdev_lvol_start_shallow_copy", 00:04:36.180 "bdev_lvol_grow_lvstore", 00:04:36.180 "bdev_lvol_get_lvols", 00:04:36.180 "bdev_lvol_get_lvstores", 
00:04:36.180 "bdev_lvol_delete", 00:04:36.180 "bdev_lvol_set_read_only", 00:04:36.180 "bdev_lvol_resize", 00:04:36.180 "bdev_lvol_decouple_parent", 00:04:36.180 "bdev_lvol_inflate", 00:04:36.180 "bdev_lvol_rename", 00:04:36.180 "bdev_lvol_clone_bdev", 00:04:36.180 "bdev_lvol_clone", 00:04:36.180 "bdev_lvol_snapshot", 00:04:36.180 "bdev_lvol_create", 00:04:36.180 "bdev_lvol_delete_lvstore", 00:04:36.180 "bdev_lvol_rename_lvstore", 00:04:36.180 "bdev_lvol_create_lvstore", 00:04:36.180 "bdev_raid_set_options", 00:04:36.180 "bdev_raid_remove_base_bdev", 00:04:36.180 "bdev_raid_add_base_bdev", 00:04:36.180 "bdev_raid_delete", 00:04:36.180 "bdev_raid_create", 00:04:36.180 "bdev_raid_get_bdevs", 00:04:36.180 "bdev_error_inject_error", 00:04:36.180 "bdev_error_delete", 00:04:36.180 "bdev_error_create", 00:04:36.180 "bdev_split_delete", 00:04:36.180 "bdev_split_create", 00:04:36.180 "bdev_delay_delete", 00:04:36.180 "bdev_delay_create", 00:04:36.180 "bdev_delay_update_latency", 00:04:36.180 "bdev_zone_block_delete", 00:04:36.180 "bdev_zone_block_create", 00:04:36.180 "blobfs_create", 00:04:36.180 "blobfs_detect", 00:04:36.180 "blobfs_set_cache_size", 00:04:36.180 "bdev_aio_delete", 00:04:36.180 "bdev_aio_rescan", 00:04:36.180 "bdev_aio_create", 00:04:36.180 "bdev_ftl_set_property", 00:04:36.180 "bdev_ftl_get_properties", 00:04:36.180 "bdev_ftl_get_stats", 00:04:36.180 "bdev_ftl_unmap", 00:04:36.180 "bdev_ftl_unload", 00:04:36.180 "bdev_ftl_delete", 00:04:36.180 "bdev_ftl_load", 00:04:36.180 "bdev_ftl_create", 00:04:36.180 "bdev_virtio_attach_controller", 00:04:36.180 "bdev_virtio_scsi_get_devices", 00:04:36.180 "bdev_virtio_detach_controller", 00:04:36.180 "bdev_virtio_blk_set_hotplug", 00:04:36.180 "bdev_iscsi_delete", 00:04:36.180 "bdev_iscsi_create", 00:04:36.180 "bdev_iscsi_set_options", 00:04:36.180 "accel_error_inject_error", 00:04:36.180 "ioat_scan_accel_module", 00:04:36.180 "dsa_scan_accel_module", 00:04:36.180 "iaa_scan_accel_module", 00:04:36.180 
"vfu_virtio_create_fs_endpoint", 00:04:36.180 "vfu_virtio_create_scsi_endpoint", 00:04:36.180 "vfu_virtio_scsi_remove_target", 00:04:36.180 "vfu_virtio_scsi_add_target", 00:04:36.180 "vfu_virtio_create_blk_endpoint", 00:04:36.180 "vfu_virtio_delete_endpoint", 00:04:36.180 "keyring_file_remove_key", 00:04:36.180 "keyring_file_add_key", 00:04:36.180 "keyring_linux_set_options", 00:04:36.180 "fsdev_aio_delete", 00:04:36.180 "fsdev_aio_create", 00:04:36.180 "iscsi_get_histogram", 00:04:36.180 "iscsi_enable_histogram", 00:04:36.180 "iscsi_set_options", 00:04:36.180 "iscsi_get_auth_groups", 00:04:36.180 "iscsi_auth_group_remove_secret", 00:04:36.180 "iscsi_auth_group_add_secret", 00:04:36.180 "iscsi_delete_auth_group", 00:04:36.180 "iscsi_create_auth_group", 00:04:36.180 "iscsi_set_discovery_auth", 00:04:36.180 "iscsi_get_options", 00:04:36.180 "iscsi_target_node_request_logout", 00:04:36.180 "iscsi_target_node_set_redirect", 00:04:36.180 "iscsi_target_node_set_auth", 00:04:36.180 "iscsi_target_node_add_lun", 00:04:36.180 "iscsi_get_stats", 00:04:36.180 "iscsi_get_connections", 00:04:36.180 "iscsi_portal_group_set_auth", 00:04:36.180 "iscsi_start_portal_group", 00:04:36.180 "iscsi_delete_portal_group", 00:04:36.180 "iscsi_create_portal_group", 00:04:36.180 "iscsi_get_portal_groups", 00:04:36.180 "iscsi_delete_target_node", 00:04:36.180 "iscsi_target_node_remove_pg_ig_maps", 00:04:36.180 "iscsi_target_node_add_pg_ig_maps", 00:04:36.180 "iscsi_create_target_node", 00:04:36.180 "iscsi_get_target_nodes", 00:04:36.180 "iscsi_delete_initiator_group", 00:04:36.180 "iscsi_initiator_group_remove_initiators", 00:04:36.180 "iscsi_initiator_group_add_initiators", 00:04:36.180 "iscsi_create_initiator_group", 00:04:36.180 "iscsi_get_initiator_groups", 00:04:36.180 "nvmf_set_crdt", 00:04:36.180 "nvmf_set_config", 00:04:36.180 "nvmf_set_max_subsystems", 00:04:36.180 "nvmf_stop_mdns_prr", 00:04:36.180 "nvmf_publish_mdns_prr", 00:04:36.180 "nvmf_subsystem_get_listeners", 00:04:36.180 
"nvmf_subsystem_get_qpairs", 00:04:36.180 "nvmf_subsystem_get_controllers", 00:04:36.180 "nvmf_get_stats", 00:04:36.180 "nvmf_get_transports", 00:04:36.180 "nvmf_create_transport", 00:04:36.180 "nvmf_get_targets", 00:04:36.180 "nvmf_delete_target", 00:04:36.180 "nvmf_create_target", 00:04:36.180 "nvmf_subsystem_allow_any_host", 00:04:36.180 "nvmf_subsystem_set_keys", 00:04:36.180 "nvmf_subsystem_remove_host", 00:04:36.180 "nvmf_subsystem_add_host", 00:04:36.180 "nvmf_ns_remove_host", 00:04:36.180 "nvmf_ns_add_host", 00:04:36.180 "nvmf_subsystem_remove_ns", 00:04:36.180 "nvmf_subsystem_set_ns_ana_group", 00:04:36.180 "nvmf_subsystem_add_ns", 00:04:36.180 "nvmf_subsystem_listener_set_ana_state", 00:04:36.180 "nvmf_discovery_get_referrals", 00:04:36.180 "nvmf_discovery_remove_referral", 00:04:36.180 "nvmf_discovery_add_referral", 00:04:36.180 "nvmf_subsystem_remove_listener", 00:04:36.180 "nvmf_subsystem_add_listener", 00:04:36.180 "nvmf_delete_subsystem", 00:04:36.180 "nvmf_create_subsystem", 00:04:36.180 "nvmf_get_subsystems", 00:04:36.180 "env_dpdk_get_mem_stats", 00:04:36.180 "nbd_get_disks", 00:04:36.180 "nbd_stop_disk", 00:04:36.180 "nbd_start_disk", 00:04:36.180 "ublk_recover_disk", 00:04:36.180 "ublk_get_disks", 00:04:36.180 "ublk_stop_disk", 00:04:36.180 "ublk_start_disk", 00:04:36.180 "ublk_destroy_target", 00:04:36.180 "ublk_create_target", 00:04:36.180 "virtio_blk_create_transport", 00:04:36.180 "virtio_blk_get_transports", 00:04:36.180 "vhost_controller_set_coalescing", 00:04:36.180 "vhost_get_controllers", 00:04:36.180 "vhost_delete_controller", 00:04:36.180 "vhost_create_blk_controller", 00:04:36.180 "vhost_scsi_controller_remove_target", 00:04:36.180 "vhost_scsi_controller_add_target", 00:04:36.180 "vhost_start_scsi_controller", 00:04:36.180 "vhost_create_scsi_controller", 00:04:36.180 "thread_set_cpumask", 00:04:36.180 "scheduler_set_options", 00:04:36.180 "framework_get_governor", 00:04:36.180 "framework_get_scheduler", 00:04:36.180 
"framework_set_scheduler", 00:04:36.180 "framework_get_reactors", 00:04:36.180 "thread_get_io_channels", 00:04:36.180 "thread_get_pollers", 00:04:36.180 "thread_get_stats", 00:04:36.180 "framework_monitor_context_switch", 00:04:36.180 "spdk_kill_instance", 00:04:36.180 "log_enable_timestamps", 00:04:36.180 "log_get_flags", 00:04:36.180 "log_clear_flag", 00:04:36.180 "log_set_flag", 00:04:36.180 "log_get_level", 00:04:36.180 "log_set_level", 00:04:36.180 "log_get_print_level", 00:04:36.180 "log_set_print_level", 00:04:36.180 "framework_enable_cpumask_locks", 00:04:36.180 "framework_disable_cpumask_locks", 00:04:36.180 "framework_wait_init", 00:04:36.180 "framework_start_init", 00:04:36.180 "scsi_get_devices", 00:04:36.180 "bdev_get_histogram", 00:04:36.180 "bdev_enable_histogram", 00:04:36.180 "bdev_set_qos_limit", 00:04:36.180 "bdev_set_qd_sampling_period", 00:04:36.180 "bdev_get_bdevs", 00:04:36.180 "bdev_reset_iostat", 00:04:36.180 "bdev_get_iostat", 00:04:36.180 "bdev_examine", 00:04:36.180 "bdev_wait_for_examine", 00:04:36.180 "bdev_set_options", 00:04:36.180 "accel_get_stats", 00:04:36.180 "accel_set_options", 00:04:36.180 "accel_set_driver", 00:04:36.180 "accel_crypto_key_destroy", 00:04:36.180 "accel_crypto_keys_get", 00:04:36.180 "accel_crypto_key_create", 00:04:36.180 "accel_assign_opc", 00:04:36.180 "accel_get_module_info", 00:04:36.180 "accel_get_opc_assignments", 00:04:36.180 "vmd_rescan", 00:04:36.180 "vmd_remove_device", 00:04:36.180 "vmd_enable", 00:04:36.180 "sock_get_default_impl", 00:04:36.180 "sock_set_default_impl", 00:04:36.180 "sock_impl_set_options", 00:04:36.180 "sock_impl_get_options", 00:04:36.180 "iobuf_get_stats", 00:04:36.180 "iobuf_set_options", 00:04:36.180 "keyring_get_keys", 00:04:36.180 "vfu_tgt_set_base_path", 00:04:36.180 "framework_get_pci_devices", 00:04:36.180 "framework_get_config", 00:04:36.180 "framework_get_subsystems", 00:04:36.180 "fsdev_set_opts", 00:04:36.180 "fsdev_get_opts", 00:04:36.180 "trace_get_info", 
00:04:36.180 "trace_get_tpoint_group_mask", 00:04:36.180 "trace_disable_tpoint_group", 00:04:36.180 "trace_enable_tpoint_group", 00:04:36.180 "trace_clear_tpoint_mask", 00:04:36.180 "trace_set_tpoint_mask", 00:04:36.180 "notify_get_notifications", 00:04:36.180 "notify_get_types", 00:04:36.180 "spdk_get_version", 00:04:36.180 "rpc_get_methods" 00:04:36.180 ] 00:04:36.180 03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.180 03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:36.180 03:55:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 107868 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 107868 ']' 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 107868 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107868 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 107868' 00:04:36.180 killing process with pid 107868 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 107868 00:04:36.180 03:55:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 107868 00:04:36.745 00:04:36.745 real 0m1.349s 00:04:36.745 user 0m2.399s 00:04:36.745 sys 0m0.476s 00:04:36.745 03:55:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.745 03:55:05 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:04:36.745 ************************************ 00:04:36.745 END TEST spdkcli_tcp 00:04:36.745 ************************************ 00:04:36.745 03:55:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.745 03:55:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.745 03:55:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.745 03:55:05 -- common/autotest_common.sh@10 -- # set +x 00:04:36.745 ************************************ 00:04:36.745 START TEST dpdk_mem_utility 00:04:36.745 ************************************ 00:04:36.745 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.745 * Looking for test storage... 00:04:36.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:36.745 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.745 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.745 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.745 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.745 03:55:05 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.745 03:55:05 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.746 03:55:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.746 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.746 --rc genhtml_branch_coverage=1 00:04:36.746 --rc genhtml_function_coverage=1 00:04:36.746 --rc genhtml_legend=1 00:04:36.746 --rc geninfo_all_blocks=1 00:04:36.746 --rc geninfo_unexecuted_blocks=1 00:04:36.746 00:04:36.746 ' 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.746 --rc genhtml_branch_coverage=1 00:04:36.746 --rc genhtml_function_coverage=1 00:04:36.746 --rc genhtml_legend=1 00:04:36.746 --rc geninfo_all_blocks=1 00:04:36.746 --rc geninfo_unexecuted_blocks=1 00:04:36.746 00:04:36.746 ' 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.746 --rc genhtml_branch_coverage=1 00:04:36.746 --rc genhtml_function_coverage=1 00:04:36.746 --rc genhtml_legend=1 00:04:36.746 --rc geninfo_all_blocks=1 00:04:36.746 --rc geninfo_unexecuted_blocks=1 00:04:36.746 00:04:36.746 ' 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.746 --rc genhtml_branch_coverage=1 00:04:36.746 --rc genhtml_function_coverage=1 00:04:36.746 --rc genhtml_legend=1 00:04:36.746 --rc geninfo_all_blocks=1 00:04:36.746 --rc geninfo_unexecuted_blocks=1 00:04:36.746 00:04:36.746 ' 00:04:36.746 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:36.746 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=108105 00:04:36.746 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.746 03:55:05 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 108105 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 108105 ']' 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.746 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.746 [2024-12-09 03:55:05.293951] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:36.746 [2024-12-09 03:55:05.294062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108105 ] 00:04:37.004 [2024-12-09 03:55:05.361357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.004 [2024-12-09 03:55:05.418607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.262 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.262 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:37.262 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.262 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.262 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:37.262 
03:55:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.262 { 00:04:37.262 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.262 } 00:04:37.262 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:37.262 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.262 DPDK memory size 818.000000 MiB in 1 heap(s) 00:04:37.262 1 heaps totaling size 818.000000 MiB 00:04:37.262 size: 818.000000 MiB heap id: 0 00:04:37.262 end heaps---------- 00:04:37.262 9 mempools totaling size 603.782043 MiB 00:04:37.262 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:37.262 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:37.262 size: 100.555481 MiB name: bdev_io_108105 00:04:37.262 size: 50.003479 MiB name: msgpool_108105 00:04:37.262 size: 36.509338 MiB name: fsdev_io_108105 00:04:37.262 size: 21.763794 MiB name: PDU_Pool 00:04:37.262 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:37.262 size: 4.133484 MiB name: evtpool_108105 00:04:37.262 size: 0.026123 MiB name: Session_Pool 00:04:37.262 end mempools------- 00:04:37.262 6 memzones totaling size 4.142822 MiB 00:04:37.262 size: 1.000366 MiB name: RG_ring_0_108105 00:04:37.262 size: 1.000366 MiB name: RG_ring_1_108105 00:04:37.262 size: 1.000366 MiB name: RG_ring_4_108105 00:04:37.262 size: 1.000366 MiB name: RG_ring_5_108105 00:04:37.262 size: 0.125366 MiB name: RG_ring_2_108105 00:04:37.262 size: 0.015991 MiB name: RG_ring_3_108105 00:04:37.262 end memzones------- 00:04:37.262 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:37.262 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:37.262 list of free elements. 
size: 10.852478 MiB 00:04:37.262 element at address: 0x200019200000 with size: 0.999878 MiB 00:04:37.262 element at address: 0x200019400000 with size: 0.999878 MiB 00:04:37.262 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:37.262 element at address: 0x200032000000 with size: 0.994446 MiB 00:04:37.262 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:37.262 element at address: 0x200012c00000 with size: 0.944275 MiB 00:04:37.262 element at address: 0x200019600000 with size: 0.936584 MiB 00:04:37.262 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:37.262 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:04:37.262 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:37.262 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:37.262 element at address: 0x200019800000 with size: 0.485657 MiB 00:04:37.262 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:37.262 element at address: 0x200028200000 with size: 0.410034 MiB 00:04:37.262 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:37.262 list of standard malloc elements. 
size: 199.218628 MiB 00:04:37.262 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:37.262 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:37.262 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:37.262 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:04:37.262 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:04:37.262 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:37.262 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:04:37.262 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:37.262 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:04:37.262 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:37.262 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:04:37.262 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200028268f80 with size: 0.000183 MiB 00:04:37.262 element at address: 0x200028269040 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:04:37.262 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:04:37.262 list of memzone associated elements. 
size: 607.928894 MiB 00:04:37.262 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:04:37.262 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:37.262 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:04:37.262 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:37.262 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:04:37.262 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_108105_0 00:04:37.262 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:37.262 associated memzone info: size: 48.002930 MiB name: MP_msgpool_108105_0 00:04:37.262 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:37.262 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_108105_0 00:04:37.262 element at address: 0x2000199be940 with size: 20.255554 MiB 00:04:37.262 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:37.262 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:04:37.262 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:37.262 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:37.262 associated memzone info: size: 3.000122 MiB name: MP_evtpool_108105_0 00:04:37.262 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:37.262 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_108105 00:04:37.262 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:37.262 associated memzone info: size: 1.007996 MiB name: MP_evtpool_108105 00:04:37.262 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:37.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:37.262 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:04:37.262 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:37.262 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:37.262 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:37.263 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:37.263 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:37.263 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:37.263 associated memzone info: size: 1.000366 MiB name: RG_ring_0_108105 00:04:37.263 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:37.263 associated memzone info: size: 1.000366 MiB name: RG_ring_1_108105 00:04:37.263 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:04:37.263 associated memzone info: size: 1.000366 MiB name: RG_ring_4_108105 00:04:37.263 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:04:37.263 associated memzone info: size: 1.000366 MiB name: RG_ring_5_108105 00:04:37.263 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:37.263 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_108105 00:04:37.263 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:37.263 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_108105 00:04:37.263 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:37.263 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:37.263 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:37.263 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:37.263 element at address: 0x20001987c540 with size: 0.250488 MiB 00:04:37.263 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:37.263 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:37.263 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_108105 00:04:37.263 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:37.263 associated memzone info: size: 0.125366 MiB name: RG_ring_2_108105 00:04:37.263 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:04:37.263 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:37.263 element at address: 0x200028269100 with size: 0.023743 MiB 00:04:37.263 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:37.263 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:37.263 associated memzone info: size: 0.015991 MiB name: RG_ring_3_108105 00:04:37.263 element at address: 0x20002826f240 with size: 0.002441 MiB 00:04:37.263 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:37.263 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:37.263 associated memzone info: size: 0.000183 MiB name: MP_msgpool_108105 00:04:37.263 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:37.263 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_108105 00:04:37.263 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:37.263 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_108105 00:04:37.263 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:04:37.263 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:37.263 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:37.263 03:55:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 108105 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 108105 ']' 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 108105 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108105 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:37.263 03:55:05 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108105' 00:04:37.263 killing process with pid 108105 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 108105 00:04:37.263 03:55:05 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 108105 00:04:37.827 00:04:37.827 real 0m1.152s 00:04:37.827 user 0m1.125s 00:04:37.827 sys 0m0.425s 00:04:37.827 03:55:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:37.827 03:55:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.827 ************************************ 00:04:37.827 END TEST dpdk_mem_utility 00:04:37.827 ************************************ 00:04:37.828 03:55:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:37.828 03:55:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:37.828 03:55:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.828 03:55:06 -- common/autotest_common.sh@10 -- # set +x 00:04:37.828 ************************************ 00:04:37.828 START TEST event 00:04:37.828 ************************************ 00:04:37.828 03:55:06 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:37.828 * Looking for test storage... 
00:04:37.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:37.828 03:55:06 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:37.828 03:55:06 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:37.828 03:55:06 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:38.086 03:55:06 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:38.086 03:55:06 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.086 03:55:06 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.086 03:55:06 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.086 03:55:06 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.086 03:55:06 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.086 03:55:06 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.086 03:55:06 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.086 03:55:06 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.086 03:55:06 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.086 03:55:06 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.086 03:55:06 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.086 03:55:06 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.086 03:55:06 event -- scripts/common.sh@345 -- # : 1 00:04:38.086 03:55:06 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.086 03:55:06 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.086 03:55:06 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.086 03:55:06 event -- scripts/common.sh@353 -- # local d=1 00:04:38.086 03:55:06 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.086 03:55:06 event -- scripts/common.sh@355 -- # echo 1 00:04:38.086 03:55:06 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.086 03:55:06 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.086 03:55:06 event -- scripts/common.sh@353 -- # local d=2 00:04:38.086 03:55:06 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.086 03:55:06 event -- scripts/common.sh@355 -- # echo 2 00:04:38.086 03:55:06 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.086 03:55:06 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.086 03:55:06 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.086 03:55:06 event -- scripts/common.sh@368 -- # return 0 00:04:38.086 03:55:06 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.086 03:55:06 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.086 --rc genhtml_branch_coverage=1 00:04:38.086 --rc genhtml_function_coverage=1 00:04:38.086 --rc genhtml_legend=1 00:04:38.086 --rc geninfo_all_blocks=1 00:04:38.086 --rc geninfo_unexecuted_blocks=1 00:04:38.086 00:04:38.086 ' 00:04:38.086 03:55:06 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.086 --rc genhtml_branch_coverage=1 00:04:38.086 --rc genhtml_function_coverage=1 00:04:38.086 --rc genhtml_legend=1 00:04:38.086 --rc geninfo_all_blocks=1 00:04:38.086 --rc geninfo_unexecuted_blocks=1 00:04:38.086 00:04:38.086 ' 00:04:38.086 03:55:06 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:38.086 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:38.086 --rc genhtml_branch_coverage=1 00:04:38.086 --rc genhtml_function_coverage=1 00:04:38.086 --rc genhtml_legend=1 00:04:38.086 --rc geninfo_all_blocks=1 00:04:38.086 --rc geninfo_unexecuted_blocks=1 00:04:38.086 00:04:38.086 ' 00:04:38.086 03:55:06 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:38.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.086 --rc genhtml_branch_coverage=1 00:04:38.086 --rc genhtml_function_coverage=1 00:04:38.086 --rc genhtml_legend=1 00:04:38.086 --rc geninfo_all_blocks=1 00:04:38.086 --rc geninfo_unexecuted_blocks=1 00:04:38.086 00:04:38.086 ' 00:04:38.086 03:55:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:38.086 03:55:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.087 03:55:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.087 03:55:06 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:38.087 03:55:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.087 03:55:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.087 ************************************ 00:04:38.087 START TEST event_perf 00:04:38.087 ************************************ 00:04:38.087 03:55:06 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.087 Running I/O for 1 seconds...[2024-12-09 03:55:06.471239] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:04:38.087 [2024-12-09 03:55:06.471321] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108400 ] 00:04:38.087 [2024-12-09 03:55:06.537156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.087 [2024-12-09 03:55:06.601035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.087 [2024-12-09 03:55:06.601098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.087 [2024-12-09 03:55:06.601168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.087 [2024-12-09 03:55:06.601171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.459 Running I/O for 1 seconds... 00:04:39.459 lcore 0: 228940 00:04:39.459 lcore 1: 228940 00:04:39.459 lcore 2: 228939 00:04:39.459 lcore 3: 228940 00:04:39.459 done. 
00:04:39.459 00:04:39.459 real 0m1.208s 00:04:39.459 user 0m4.133s 00:04:39.459 sys 0m0.070s 00:04:39.459 03:55:07 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.459 03:55:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 ************************************ 00:04:39.459 END TEST event_perf 00:04:39.459 ************************************ 00:04:39.459 03:55:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.459 03:55:07 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:39.459 03:55:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.459 03:55:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.459 ************************************ 00:04:39.459 START TEST event_reactor 00:04:39.459 ************************************ 00:04:39.459 03:55:07 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.459 [2024-12-09 03:55:07.732549] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:04:39.459 [2024-12-09 03:55:07.732623] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108557 ] 00:04:39.459 [2024-12-09 03:55:07.799247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.459 [2024-12-09 03:55:07.853922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.393 test_start 00:04:40.393 oneshot 00:04:40.393 tick 100 00:04:40.393 tick 100 00:04:40.393 tick 250 00:04:40.393 tick 100 00:04:40.393 tick 100 00:04:40.393 tick 100 00:04:40.393 tick 250 00:04:40.393 tick 500 00:04:40.393 tick 100 00:04:40.393 tick 100 00:04:40.393 tick 250 00:04:40.393 tick 100 00:04:40.393 tick 100 00:04:40.393 test_end 00:04:40.393 00:04:40.393 real 0m1.199s 00:04:40.393 user 0m1.137s 00:04:40.393 sys 0m0.058s 00:04:40.393 03:55:08 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:40.393 03:55:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:40.393 ************************************ 00:04:40.393 END TEST event_reactor 00:04:40.393 ************************************ 00:04:40.393 03:55:08 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.393 03:55:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:40.393 03:55:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:40.393 03:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.652 ************************************ 00:04:40.652 START TEST event_reactor_perf 00:04:40.652 ************************************ 00:04:40.652 03:55:08 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:40.652 [2024-12-09 03:55:08.984746] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:40.652 [2024-12-09 03:55:08.984813] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108709 ] 00:04:40.652 [2024-12-09 03:55:09.050391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.652 [2024-12-09 03:55:09.103752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.026 test_start 00:04:42.026 test_end 00:04:42.026 Performance: 444393 events per second 00:04:42.026 00:04:42.026 real 0m1.199s 00:04:42.026 user 0m1.121s 00:04:42.026 sys 0m0.072s 00:04:42.026 03:55:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:42.026 03:55:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:42.026 ************************************ 00:04:42.026 END TEST event_reactor_perf 00:04:42.026 ************************************ 00:04:42.026 03:55:10 event -- event/event.sh@49 -- # uname -s 00:04:42.026 03:55:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:42.026 03:55:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:42.026 03:55:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.026 03:55:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.026 03:55:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:42.026 ************************************ 00:04:42.026 START TEST event_scheduler 00:04:42.026 ************************************ 00:04:42.026 03:55:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:42.026 * Looking for test storage... 00:04:42.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:42.026 03:55:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:42.026 03:55:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:42.026 03:55:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:42.026 03:55:10 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.027 03:55:10 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.027 --rc genhtml_branch_coverage=1 00:04:42.027 --rc genhtml_function_coverage=1 00:04:42.027 --rc genhtml_legend=1 00:04:42.027 --rc geninfo_all_blocks=1 00:04:42.027 --rc geninfo_unexecuted_blocks=1 00:04:42.027 00:04:42.027 ' 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.027 --rc genhtml_branch_coverage=1 00:04:42.027 --rc genhtml_function_coverage=1 00:04:42.027 --rc 
genhtml_legend=1 00:04:42.027 --rc geninfo_all_blocks=1 00:04:42.027 --rc geninfo_unexecuted_blocks=1 00:04:42.027 00:04:42.027 ' 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.027 --rc genhtml_branch_coverage=1 00:04:42.027 --rc genhtml_function_coverage=1 00:04:42.027 --rc genhtml_legend=1 00:04:42.027 --rc geninfo_all_blocks=1 00:04:42.027 --rc geninfo_unexecuted_blocks=1 00:04:42.027 00:04:42.027 ' 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:42.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.027 --rc genhtml_branch_coverage=1 00:04:42.027 --rc genhtml_function_coverage=1 00:04:42.027 --rc genhtml_legend=1 00:04:42.027 --rc geninfo_all_blocks=1 00:04:42.027 --rc geninfo_unexecuted_blocks=1 00:04:42.027 00:04:42.027 ' 00:04:42.027 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:42.027 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=108901 00:04:42.027 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:42.027 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.027 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 108901 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 108901 ']' 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:42.027 03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.027 [2024-12-09 03:55:10.422841] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:04:42.027 [2024-12-09 03:55:10.422941] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108901 ] 00:04:42.027 [2024-12-09 03:55:10.492899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.027 [2024-12-09 03:55:10.556229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.027 [2024-12-09 03:55:10.556295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.027 [2024-12-09 03:55:10.556357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.027 [2024-12-09 03:55:10.556361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:42.286 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.286 [2024-12-09 03:55:10.673355] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:42.286 [2024-12-09 03:55:10.673381] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:42.286 [2024-12-09 03:55:10.673398] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:42.286 [2024-12-09 03:55:10.673409] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:42.286 [2024-12-09 03:55:10.673419] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.286 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.286 [2024-12-09 03:55:10.771111] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.286 03:55:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:42.286 03:55:10 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 ************************************ 00:04:42.287 START TEST scheduler_create_thread 00:04:42.287 ************************************ 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 2 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 3 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 4 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 5 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.287 03:55:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 6 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.287 7 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.287 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.546 8 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.546 03:55:10 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.546 9 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.546 10 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.546 03:55:10 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.546 03:55:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.113 03:55:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:43.113 00:04:43.113 real 0m0.592s 00:04:43.113 user 0m0.010s 00:04:43.113 sys 0m0.005s 00:04:43.113 03:55:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.113 03:55:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.113 ************************************ 00:04:43.113 END TEST scheduler_create_thread 00:04:43.113 ************************************ 00:04:43.113 03:55:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:43.113 03:55:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 108901 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 108901 ']' 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 108901 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108901 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108901' 00:04:43.113 killing process with pid 108901 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 108901 00:04:43.113 03:55:11 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 108901 00:04:43.371 [2024-12-09 03:55:11.875327] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:43.631 00:04:43.631 real 0m1.865s 00:04:43.631 user 0m2.580s 00:04:43.631 sys 0m0.367s 00:04:43.631 03:55:12 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.631 03:55:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:43.631 ************************************ 00:04:43.631 END TEST event_scheduler 00:04:43.631 ************************************ 00:04:43.631 03:55:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:43.631 03:55:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:43.631 03:55:12 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.631 03:55:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.631 03:55:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:43.631 ************************************ 00:04:43.631 START TEST app_repeat 00:04:43.631 ************************************ 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=109211 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 109211' 00:04:43.631 Process app_repeat pid: 109211 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:43.631 spdk_app_start Round 0 00:04:43.631 03:55:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']' 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:43.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.631 03:55:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:43.631 [2024-12-09 03:55:12.176197] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:04:43.631 [2024-12-09 03:55:12.176268] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109211 ] 00:04:43.890 [2024-12-09 03:55:12.242002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.890 [2024-12-09 03:55:12.297315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.890 [2024-12-09 03:55:12.297319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.890 03:55:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.890 03:55:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:43.890 03:55:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.149 Malloc0 00:04:44.149 03:55:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:44.408 Malloc1 00:04:44.666 03:55:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.666 03:55:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.666 03:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.666 03:55:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:44.666 03:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.666 03:55:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:44.666 03:55:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:44.666 
03:55:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.666 03:55:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:44.924 /dev/nbd0 00:04:44.924 03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:44.924 03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:44.924 1+0 records in 00:04:44.924 1+0 records out 00:04:44.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000176952 s, 23.1 MB/s 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:44.924 03:55:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:44.924 03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:44.924 03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:44.924 03:55:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:45.183 /dev/nbd1 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:45.183 03:55:13 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:45.183 1+0 records in 00:04:45.183 1+0 records out 00:04:45.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219963 s, 18.6 MB/s 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:45.183 03:55:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.183 03:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:45.441 { 00:04:45.441 "nbd_device": "/dev/nbd0", 00:04:45.441 "bdev_name": "Malloc0" 00:04:45.441 }, 00:04:45.441 { 00:04:45.441 "nbd_device": "/dev/nbd1", 00:04:45.441 "bdev_name": "Malloc1" 00:04:45.441 } 00:04:45.441 ]' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:45.441 { 00:04:45.441 "nbd_device": "/dev/nbd0", 00:04:45.441 "bdev_name": "Malloc0" 00:04:45.441 
}, 00:04:45.441 { 00:04:45.441 "nbd_device": "/dev/nbd1", 00:04:45.441 "bdev_name": "Malloc1" 00:04:45.441 } 00:04:45.441 ]' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:45.441 /dev/nbd1' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:45.441 /dev/nbd1' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:45.441 256+0 records in 00:04:45.441 256+0 records out 00:04:45.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00504387 s, 208 MB/s 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.441 03:55:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:45.441 256+0 records in 00:04:45.441 256+0 records out 00:04:45.441 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212683 s, 49.3 MB/s 00:04:45.441 03:55:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:45.441 03:55:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:45.700 256+0 records in 00:04:45.700 256+0 records out 00:04:45.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235782 s, 44.5 MB/s 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:45.700 03:55:14 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.700 03:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:45.958 03:55:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:46.216 03:55:14 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:46.216 03:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:46.474 03:55:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:46.474 03:55:14 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:46.731 03:55:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:46.987 [2024-12-09 03:55:15.443309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:46.987 [2024-12-09 03:55:15.497801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.987 [2024-12-09 03:55:15.497801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:46.987 [2024-12-09 03:55:15.554314] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:46.987 [2024-12-09 03:55:15.554396] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:50.269 03:55:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:50.269 03:55:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:50.269 spdk_app_start Round 1 00:04:50.269 03:55:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']' 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:50.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.269 03:55:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:50.269 03:55:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.269 Malloc0 00:04:50.269 03:55:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:50.529 Malloc1 00:04:50.529 03:55:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:50.529 03:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:51.096 /dev/nbd0 00:04:51.096 03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:51.096 03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.096 1+0 records in 00:04:51.096 1+0 records out 00:04:51.096 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177867 s, 23.0 MB/s 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.096 03:55:19 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.096 03:55:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.096 03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.096 03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.096 03:55:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:51.354 /dev/nbd1 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:51.354 1+0 records in 00:04:51.354 1+0 records out 00:04:51.354 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225209 s, 18.2 MB/s 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:51.354 03:55:19 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.354 03:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:51.613 { 00:04:51.613 "nbd_device": "/dev/nbd0", 00:04:51.613 "bdev_name": "Malloc0" 00:04:51.613 }, 00:04:51.613 { 00:04:51.613 "nbd_device": "/dev/nbd1", 00:04:51.613 "bdev_name": "Malloc1" 00:04:51.613 } 00:04:51.613 ]' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:51.613 { 00:04:51.613 "nbd_device": "/dev/nbd0", 00:04:51.613 "bdev_name": "Malloc0" 00:04:51.613 }, 00:04:51.613 { 00:04:51.613 "nbd_device": "/dev/nbd1", 00:04:51.613 "bdev_name": "Malloc1" 00:04:51.613 } 00:04:51.613 ]' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:51.613 /dev/nbd1' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:51.613 /dev/nbd1' 00:04:51.613 
03:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:51.613 256+0 records in 00:04:51.613 256+0 records out 00:04:51.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496244 s, 211 MB/s 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:51.613 256+0 records in 00:04:51.613 256+0 records out 00:04:51.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209895 s, 50.0 MB/s 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:51.613 256+0 records in 00:04:51.613 256+0 records out 00:04:51.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251976 s, 41.6 MB/s 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.613 03:55:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:51.871 03:55:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:52.438 03:55:20 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.438 03:55:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.438 03:55:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:52.438 03:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:52.438 03:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:52.696 03:55:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:52.696 03:55:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:52.955 03:55:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:53.214 [2024-12-09 03:55:21.564038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:53.214 [2024-12-09 03:55:21.622114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.214 [2024-12-09 03:55:21.622114] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.214 [2024-12-09 03:55:21.676310] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:53.214 [2024-12-09 03:55:21.676383] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:56.493 03:55:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.493 03:55:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:56.493 spdk_app_start Round 2 00:04:56.493 03:55:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']' 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.493 03:55:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:56.493 03:55:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.493 Malloc0 00:04:56.493 03:55:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:56.750 Malloc1 00:04:56.750 03:55:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:56.750 03:55:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:57.006 /dev/nbd0 00:04:57.006 03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:57.006 03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.006 1+0 records in 00:04:57.006 1+0 records out 00:04:57.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000170438 s, 24.0 MB/s 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.006 03:55:25 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.006 03:55:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.006 03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.006 03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.006 03:55:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:57.263 /dev/nbd1 00:04:57.263 03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:57.263 03:55:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:57.263 03:55:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:57.263 03:55:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:57.263 03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:57.263 03:55:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:57.263 03:55:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:57.520 1+0 records in 00:04:57.520 1+0 records out 00:04:57.520 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204611 s, 20.0 MB/s 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:57.520 03:55:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:57.520 03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:57.520 03:55:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.520 03:55:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:57.520 03:55:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.520 03:55:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:57.777 { 00:04:57.777 "nbd_device": "/dev/nbd0", 00:04:57.777 "bdev_name": "Malloc0" 00:04:57.777 }, 00:04:57.777 { 00:04:57.777 "nbd_device": "/dev/nbd1", 00:04:57.777 "bdev_name": "Malloc1" 00:04:57.777 } 00:04:57.777 ]' 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:57.777 { 00:04:57.777 "nbd_device": "/dev/nbd0", 00:04:57.777 "bdev_name": "Malloc0" 00:04:57.777 }, 00:04:57.777 { 00:04:57.777 "nbd_device": "/dev/nbd1", 00:04:57.777 "bdev_name": "Malloc1" 00:04:57.777 } 00:04:57.777 ]' 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:57.777 /dev/nbd1' 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:57.777 /dev/nbd1' 00:04:57.777 
03:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:57.777 256+0 records in 00:04:57.777 256+0 records out 00:04:57.777 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00480454 s, 218 MB/s 00:04:57.777 03:55:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:57.778 256+0 records in 00:04:57.778 256+0 records out 00:04:57.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232653 s, 45.1 MB/s 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:57.778 256+0 records in 00:04:57.778 256+0 records out 00:04:57.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0232975 s, 45.0 MB/s 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:57.778 03:55:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.035 03:55:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.036 03:55:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.036 03:55:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.036 03:55:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.293 03:55:26 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.293 03:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.551 03:55:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:58.551 03:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:58.551 03:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:58.809 03:55:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:58.810 03:55:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.068 03:55:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.326 [2024-12-09 03:55:27.651927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.326 [2024-12-09 03:55:27.706928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.326 [2024-12-09 03:55:27.706932] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.326 [2024-12-09 03:55:27.763677] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.326 [2024-12-09 03:55:27.763745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.612 03:55:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 109211 /var/tmp/spdk-nbd.sock 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 109211 ']' 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
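The write/verify pass traced earlier (`nbd_common.sh`'s `nbd_dd_data_verify`) follows a simple pattern: fill a temp file from `/dev/urandom`, `dd` it onto each nbd device, then `cmp` the first 1M of each device back against the file. A sketch of that pattern, with plain temp files standing in for `/dev/nbd0` and `/dev/nbd1` so it runs without an SPDK target (`oflag=direct` is dropped for the same reason):

```shell
tmp_file=$(mktemp)               # stands in for .../test/event/nbdrandtest
nbd0=$(mktemp); nbd1=$(mktemp)   # stand-ins for /dev/nbd0 and /dev/nbd1
nbd_list=("$nbd0" "$nbd1")

# Write pass: 1M of random payload, then copy it onto every "device".
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in "${nbd_list[@]}"; do
  dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# Verify pass: byte-compare the first 1M of each device against the payload.
verified=0
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp_file" "$dev" && (( verified++ ))
done
echo "verified $verified/${#nbd_list[@]} devices"
rm -f "$tmp_file" "${nbd_list[@]}"
```

Against real nbd devices the `oflag=direct` in the trace bypasses the page cache so the verify reads hit the SPDK bdev rather than cached data.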
00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.612 03:55:30 event.app_repeat -- event/event.sh@39 -- # killprocess 109211 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 109211 ']' 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 109211 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109211 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109211' 00:05:02.612 killing process with pid 109211 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@973 -- # kill 109211 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@978 -- # wait 109211 00:05:02.612 spdk_app_start is called in Round 0. 00:05:02.612 Shutdown signal received, stop current app iteration 00:05:02.612 Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 reinitialization... 00:05:02.612 spdk_app_start is called in Round 1. 00:05:02.612 Shutdown signal received, stop current app iteration 00:05:02.612 Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 reinitialization... 00:05:02.612 spdk_app_start is called in Round 2. 
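The `killprocess` sequence traced above ("killing process with pid 109211") layers several safety checks before the actual `kill`: `kill -0` to confirm the pid is alive, `ps -o comm=` to confirm what it is (so the helper never kills a `sudo` wrapper), then `kill` and `wait`. A hedged sketch of that shape; `killprocess_sketch` is an illustrative name, not the suite's helper:

```shell
killprocess_sketch() {
  local pid=$1
  # kill -0 sends no signal; it only tests that the pid exists and is ours.
  kill -0 "$pid" 2>/dev/null || return 1
  local process_name
  process_name=$(ps --no-headers -o comm= "$pid")
  # Refuse to terminate a sudo wrapper, as the real helper does.
  [ "$process_name" = sudo ] && return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null   # reap it so no zombie is left behind
  return 0
}

# Usage: background a sleeper and reap it.
sleep 30 &
killprocess_sketch $!
```

The follow-up `wait 109211` in the trace serves the same purpose as the `wait` here: it blocks until the reactor process has actually exited before the next test round starts.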
00:05:02.612 Shutdown signal received, stop current app iteration 00:05:02.612 Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 reinitialization... 00:05:02.612 spdk_app_start is called in Round 3. 00:05:02.612 Shutdown signal received, stop current app iteration 00:05:02.612 03:55:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:02.612 03:55:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:02.612 00:05:02.612 real 0m18.777s 00:05:02.612 user 0m41.469s 00:05:02.612 sys 0m3.278s 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.612 03:55:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.612 ************************************ 00:05:02.612 END TEST app_repeat 00:05:02.612 ************************************ 00:05:02.612 03:55:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:02.612 03:55:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:02.612 03:55:30 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.612 03:55:30 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.612 03:55:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:02.612 ************************************ 00:05:02.612 START TEST cpu_locks 00:05:02.612 ************************************ 00:05:02.612 03:55:30 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:02.612 * Looking for test storage... 
00:05:02.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.612 03:55:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.612 --rc genhtml_branch_coverage=1 00:05:02.612 --rc genhtml_function_coverage=1 00:05:02.612 --rc genhtml_legend=1 00:05:02.612 --rc geninfo_all_blocks=1 00:05:02.612 --rc geninfo_unexecuted_blocks=1 00:05:02.612 00:05:02.612 ' 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.612 --rc genhtml_branch_coverage=1 00:05:02.612 --rc genhtml_function_coverage=1 00:05:02.612 --rc genhtml_legend=1 00:05:02.612 --rc geninfo_all_blocks=1 00:05:02.612 --rc geninfo_unexecuted_blocks=1 
00:05:02.612 00:05:02.612 ' 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.612 --rc genhtml_branch_coverage=1 00:05:02.612 --rc genhtml_function_coverage=1 00:05:02.612 --rc genhtml_legend=1 00:05:02.612 --rc geninfo_all_blocks=1 00:05:02.612 --rc geninfo_unexecuted_blocks=1 00:05:02.612 00:05:02.612 ' 00:05:02.612 03:55:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.612 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.612 --rc genhtml_branch_coverage=1 00:05:02.612 --rc genhtml_function_coverage=1 00:05:02.612 --rc genhtml_legend=1 00:05:02.612 --rc geninfo_all_blocks=1 00:05:02.612 --rc geninfo_unexecuted_blocks=1 00:05:02.612 00:05:02.612 ' 00:05:02.612 03:55:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:02.612 03:55:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:02.612 03:55:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:02.613 03:55:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:02.613 03:55:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.613 03:55:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.613 03:55:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.613 ************************************ 00:05:02.613 START TEST default_locks 00:05:02.613 ************************************ 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=111591 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 111591 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 111591 ']' 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.613 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.613 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:02.872 [2024-12-09 03:55:31.214936] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
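The lcov version gate traced earlier (`scripts/common.sh`'s `lt`/`cmp_versions`, comparing `1.15 '<' 2`) splits each version on `.`, `-`, or `:` and compares numerically field by field, treating missing fields as zero. A simplified sketch under that reading; the real `cmp_versions` also handles `>`, `>=`, and `<=`:

```shell
# Strict less-than over dotted version strings, field by field.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # decided: less
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # decided: greater
  done
  return 1   # all fields equal, so not strictly less-than
}

lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why the trace shows the branch-coverage `LCOV_OPTS` being exported: lcov 1.x predates 2, so the old-style `--rc lcov_branch_coverage=1` flags are selected.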
00:05:02.872 [2024-12-09 03:55:31.215036] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111591 ] 00:05:02.872 [2024-12-09 03:55:31.283952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.872 [2024-12-09 03:55:31.344668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.130 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.130 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:03.130 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 111591 00:05:03.130 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 111591 00:05:03.130 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:03.388 lslocks: write error 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 111591 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 111591 ']' 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 111591 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111591 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 111591' 00:05:03.388 killing process with pid 111591 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 111591 00:05:03.388 03:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 111591 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 111591 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 111591 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 111591 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 111591 ']' 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
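The `locks_exist` check traced above (`lslocks -p 111591 | grep -q spdk_cpu_lock`, including the harmless `lslocks: write error` when `grep -q` closes the pipe early) works because `spdk_tgt` holds flock()-ed per-core lock files. A hedged sketch of the underlying mechanism using a scratch file in `/tmp` (the name is illustrative; the target uses `/var/tmp/spdk_cpu_lock_<core>`):

```shell
# Take an exclusive flock on a scratch lock file, the way spdk_tgt pins
# its scheduled cores; keep fd 9 open so the lock stays held.
lockfile=$(mktemp /tmp/spdk_cpu_lock_demo.XXXXXX)
exec 9>"$lockfile"
flock -n 9 && echo "lock acquired"

# A second open file description on the same file now cannot get the lock;
# this is the condition that locks_exist's lslocks query detects.
lock_held=no
if ! flock -n "$lockfile" -c true; then
  lock_held=yes
  echo "lock still held"
fi
rm -f "$lockfile"
```

flock(2) locks belong to the open file description, so the second non-blocking attempt fails even from the same process, and `lslocks` reports the held lock against the owning pid until fd 9 is closed.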
00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (111591) - No such process 00:05:03.954 ERROR: process (pid: 111591) is no longer running 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:03.954 00:05:03.954 real 0m1.196s 00:05:03.954 user 0m1.149s 00:05:03.954 sys 0m0.517s 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:03.954 03:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 ************************************ 00:05:03.954 END TEST default_locks 00:05:03.954 ************************************ 00:05:03.954 03:55:32 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:03.954 03:55:32 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:03.954 03:55:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:03.954 03:55:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 ************************************ 00:05:03.954 START TEST default_locks_via_rpc 00:05:03.954 ************************************ 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=111867 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 111867 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 111867 ']' 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:03.954 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.954 [2024-12-09 03:55:32.464002] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:05:03.954 [2024-12-09 03:55:32.464115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111867 ] 00:05:04.212 [2024-12-09 03:55:32.530718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.212 [2024-12-09 03:55:32.591046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.470 03:55:32 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 111867 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 111867 00:05:04.470 03:55:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 111867 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 111867 ']' 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 111867 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111867 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111867' 00:05:04.727 killing process with pid 111867 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 111867 00:05:04.727 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 111867 00:05:04.985 00:05:04.985 real 0m1.148s 00:05:04.985 user 0m1.129s 00:05:04.985 sys 0m0.487s 00:05:04.985 03:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:04.985 03:55:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.985 ************************************ 00:05:04.985 END TEST default_locks_via_rpc 00:05:04.985 ************************************ 00:05:05.243 03:55:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:05.243 03:55:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.243 03:55:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.243 03:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.243 ************************************ 00:05:05.243 START TEST non_locking_app_on_locked_coremask 00:05:05.243 ************************************ 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=112027 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 112027 /var/tmp/spdk.sock 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112027 ']' 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:05.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.243 03:55:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.243 [2024-12-09 03:55:33.664755] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:05.243 [2024-12-09 03:55:33.664837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112027 ] 00:05:05.243 [2024-12-09 03:55:33.730588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.243 [2024-12-09 03:55:33.787026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=112041 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 112041 /var/tmp/spdk2.sock 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112041 ']' 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:05.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.501 03:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:05.759 [2024-12-09 03:55:34.109974] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:05.759 [2024-12-09 03:55:34.110054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112041 ] 00:05:05.759 [2024-12-09 03:55:34.207576] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:05.759 [2024-12-09 03:55:34.207618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.759 [2024-12-09 03:55:34.323852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.691 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.692 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:06.692 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 112027 00:05:06.692 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 112027 00:05:06.692 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:07.257 lslocks: write error 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 112027 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112027 ']' 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 112027 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112027 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 112027' 00:05:07.257 killing process with pid 112027 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 112027 00:05:07.257 03:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 112027 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 112041 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112041 ']' 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 112041 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112041 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112041' 00:05:08.191 killing process with pid 112041 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 112041 00:05:08.191 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 112041 00:05:08.450 00:05:08.450 real 0m3.257s 00:05:08.450 user 0m3.508s 00:05:08.450 sys 0m1.038s 00:05:08.450 03:55:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.450 03:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.450 ************************************ 00:05:08.450 END TEST non_locking_app_on_locked_coremask 00:05:08.450 ************************************ 00:05:08.450 03:55:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:08.450 03:55:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.450 03:55:36 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.450 03:55:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:08.450 ************************************ 00:05:08.450 START TEST locking_app_on_unlocked_coremask 00:05:08.450 ************************************ 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=112431 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 112431 /var/tmp/spdk.sock 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112431 ']' 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.450 03:55:36 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.450 03:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.450 [2024-12-09 03:55:36.975172] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:08.450 [2024-12-09 03:55:36.975303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112431 ] 00:05:08.709 [2024-12-09 03:55:37.042968] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:08.709 [2024-12-09 03:55:37.043004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.709 [2024-12-09 03:55:37.100503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=112477 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 112477 /var/tmp/spdk2.sock 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112477 ']' 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:08.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.968 03:55:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:08.968 [2024-12-09 03:55:37.419059] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:05:08.968 [2024-12-09 03:55:37.419152] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112477 ] 00:05:08.968 [2024-12-09 03:55:37.520889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.227 [2024-12-09 03:55:37.633367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.162 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.162 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:10.162 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 112477 00:05:10.162 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 112477 00:05:10.162 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.420 lslocks: write error 00:05:10.420 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 112431 00:05:10.420 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112431 ']' 00:05:10.420 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 112431 00:05:10.420 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:10.420 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.420 03:55:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112431 00:05:10.679 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.679 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.679 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112431' 00:05:10.679 killing process with pid 112431 00:05:10.679 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 112431 00:05:10.679 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 112431 00:05:11.245 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 112477 00:05:11.245 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112477 ']' 00:05:11.245 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 112477 00:05:11.245 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:11.245 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.245 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112477 00:05:11.504 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.504 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.504 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112477' 00:05:11.504 killing process with pid 112477 00:05:11.504 03:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 112477 00:05:11.504 03:55:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 112477 00:05:11.762 00:05:11.762 real 0m3.343s 00:05:11.762 user 0m3.587s 00:05:11.762 sys 0m1.067s 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.762 ************************************ 00:05:11.762 END TEST locking_app_on_unlocked_coremask 00:05:11.762 ************************************ 00:05:11.762 03:55:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:11.762 03:55:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.762 03:55:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.762 03:55:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:11.762 ************************************ 00:05:11.762 START TEST locking_app_on_locked_coremask 00:05:11.762 ************************************ 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=112855 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 112855 /var/tmp/spdk.sock 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112855 ']' 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:11.762 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.020 [2024-12-09 03:55:40.371108] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:12.020 [2024-12-09 03:55:40.371187] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112855 ] 00:05:12.020 [2024-12-09 03:55:40.439244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.020 [2024-12-09 03:55:40.499574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.278 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.278 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:12.278 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=112913 00:05:12.278 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 112913 /var/tmp/spdk2.sock 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 112913 /var/tmp/spdk2.sock 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 112913 /var/tmp/spdk2.sock 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 112913 ']' 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:12.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.279 03:55:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.279 [2024-12-09 03:55:40.832918] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:12.279 [2024-12-09 03:55:40.832995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112913 ] 00:05:12.537 [2024-12-09 03:55:40.933172] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 112855 has claimed it. 00:05:12.537 [2024-12-09 03:55:40.933238] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (112913) - No such process 00:05:13.102 ERROR: process (pid: 112913) is no longer running 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 112855 00:05:13.102 03:55:41 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 112855 00:05:13.102 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:13.377 lslocks: write error 00:05:13.377 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 112855 00:05:13.377 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 112855 ']' 00:05:13.377 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 112855 00:05:13.377 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:13.377 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.377 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112855 00:05:13.634 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.634 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.634 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112855' 00:05:13.634 killing process with pid 112855 00:05:13.634 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 112855 00:05:13.634 03:55:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 112855 00:05:13.893 00:05:13.893 real 0m2.070s 00:05:13.893 user 0m2.263s 00:05:13.893 sys 0m0.673s 00:05:13.893 03:55:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.893 03:55:42 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.893 ************************************ 00:05:13.893 END TEST locking_app_on_locked_coremask 00:05:13.893 ************************************ 00:05:13.893 03:55:42 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:13.893 03:55:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:13.893 03:55:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.893 03:55:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:13.893 ************************************ 00:05:13.893 START TEST locking_overlapped_coremask 00:05:13.893 ************************************ 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=113086 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 113086 /var/tmp/spdk.sock 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 113086 ']' 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.893 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.153 [2024-12-09 03:55:42.493214] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:14.153 [2024-12-09 03:55:42.493316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113086 ] 00:05:14.153 [2024-12-09 03:55:42.561225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.153 [2024-12-09 03:55:42.623330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.153 [2024-12-09 03:55:42.623366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.153 [2024-12-09 03:55:42.623369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=113211 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 113211 /var/tmp/spdk2.sock 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 113211 /var/tmp/spdk2.sock 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 113211 /var/tmp/spdk2.sock 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 113211 ']' 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.412 03:55:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.412 [2024-12-09 03:55:42.961241] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:05:14.412 [2024-12-09 03:55:42.961343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113211 ] 00:05:14.670 [2024-12-09 03:55:43.066622] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113086 has claimed it. 00:05:14.670 [2024-12-09 03:55:43.066690] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:15.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (113211) - No such process 00:05:15.237 ERROR: process (pid: 113211) is no longer running 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 113086 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 113086 ']' 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 113086 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113086 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113086' 00:05:15.237 killing process with pid 113086 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 113086 00:05:15.237 03:55:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 113086 00:05:15.805 00:05:15.805 real 0m1.691s 00:05:15.805 user 0m4.698s 00:05:15.805 sys 0m0.474s 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.805 ************************************ 
00:05:15.805 END TEST locking_overlapped_coremask 00:05:15.805 ************************************ 00:05:15.805 03:55:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:15.805 03:55:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.805 03:55:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.805 03:55:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:15.805 ************************************ 00:05:15.805 START TEST locking_overlapped_coremask_via_rpc 00:05:15.805 ************************************ 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=113381 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 113381 /var/tmp/spdk.sock 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113381 ']' 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.805 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.805 [2024-12-09 03:55:44.234403] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:15.805 [2024-12-09 03:55:44.234485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113381 ] 00:05:15.805 [2024-12-09 03:55:44.297445] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:15.805 [2024-12-09 03:55:44.297475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.805 [2024-12-09 03:55:44.352289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.805 [2024-12-09 03:55:44.352343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.805 [2024-12-09 03:55:44.352348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=113392 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # 
waitforlisten 113392 /var/tmp/spdk2.sock 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113392 ']' 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.064 03:55:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.323 [2024-12-09 03:55:44.681984] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:16.323 [2024-12-09 03:55:44.682069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113392 ] 00:05:16.323 [2024-12-09 03:55:44.785432] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:16.323 [2024-12-09 03:55:44.785476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.581 [2024-12-09 03:55:44.911948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:16.581 [2024-12-09 03:55:44.915330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:16.581 [2024-12-09 03:55:44.915333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.146 03:55:45 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.146 [2024-12-09 03:55:45.700382] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 113381 has claimed it. 00:05:17.146 request: 00:05:17.146 { 00:05:17.146 "method": "framework_enable_cpumask_locks", 00:05:17.146 "req_id": 1 00:05:17.146 } 00:05:17.146 Got JSON-RPC error response 00:05:17.146 response: 00:05:17.146 { 00:05:17.146 "code": -32603, 00:05:17.146 "message": "Failed to claim CPU core: 2" 00:05:17.146 } 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 113381 /var/tmp/spdk.sock 00:05:17.146 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 113381 ']' 00:05:17.147 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:17.147 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.147 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:17.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:17.147 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.147 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.711 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.711 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.711 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 113392 /var/tmp/spdk2.sock 00:05:17.711 03:55:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 113392 ']' 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:17.711 00:05:17.711 real 0m2.087s 00:05:17.711 user 0m1.177s 00:05:17.711 sys 0m0.156s 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.711 03:55:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.711 ************************************ 00:05:17.711 END TEST locking_overlapped_coremask_via_rpc 00:05:17.711 ************************************ 00:05:17.969 03:55:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:17.969 03:55:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 113381 ]] 00:05:17.969 03:55:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 113381 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113381 ']' 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113381 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113381 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113381' 00:05:17.969 killing process with pid 113381 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 113381 00:05:17.969 03:55:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 113381 00:05:18.227 03:55:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 113392 ]] 00:05:18.227 03:55:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 113392 00:05:18.227 03:55:46 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113392 ']' 00:05:18.227 03:55:46 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113392 00:05:18.227 03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:18.227 03:55:46 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.227 03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 113392 00:05:18.484 03:55:46 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:18.484 03:55:46 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:18.484 03:55:46 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 113392' 00:05:18.484 
killing process with pid 113392 00:05:18.484 03:55:46 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 113392 00:05:18.484 03:55:46 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 113392 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 113381 ]] 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 113381 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113381 ']' 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113381 00:05:18.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (113381) - No such process 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 113381 is not found' 00:05:18.743 Process with pid 113381 is not found 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 113392 ]] 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 113392 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 113392 ']' 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 113392 00:05:18.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (113392) - No such process 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 113392 is not found' 00:05:18.743 Process with pid 113392 is not found 00:05:18.743 03:55:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:18.743 00:05:18.743 real 0m16.251s 00:05:18.743 user 0m29.454s 00:05:18.743 sys 0m5.374s 00:05:18.743 03:55:47 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.743 03:55:47 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.743 ************************************ 00:05:18.743 END TEST cpu_locks 00:05:18.743 ************************************ 00:05:18.743 00:05:18.743 real 0m40.958s 00:05:18.743 user 1m20.107s 00:05:18.743 sys 0m9.490s 00:05:18.743 03:55:47 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.743 03:55:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.743 ************************************ 00:05:18.743 END TEST event 00:05:18.743 ************************************ 00:05:18.743 03:55:47 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:18.743 03:55:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.743 03:55:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.743 03:55:47 -- common/autotest_common.sh@10 -- # set +x 00:05:18.743 ************************************ 00:05:18.743 START TEST thread 00:05:18.743 ************************************ 00:05:18.743 03:55:47 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.002 * Looking for test storage... 
00:05:19.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:19.002 03:55:47 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:19.002 03:55:47 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:19.002 03:55:47 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:19.003 03:55:47 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.003 03:55:47 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.003 03:55:47 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.003 03:55:47 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.003 03:55:47 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.003 03:55:47 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.003 03:55:47 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.003 03:55:47 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.003 03:55:47 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.003 03:55:47 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.003 03:55:47 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.003 03:55:47 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:19.003 03:55:47 thread -- scripts/common.sh@345 -- # : 1 00:05:19.003 03:55:47 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.003 03:55:47 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:19.003 03:55:47 thread -- scripts/common.sh@365 -- # decimal 1 00:05:19.003 03:55:47 thread -- scripts/common.sh@353 -- # local d=1 00:05:19.003 03:55:47 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.003 03:55:47 thread -- scripts/common.sh@355 -- # echo 1 00:05:19.003 03:55:47 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.003 03:55:47 thread -- scripts/common.sh@366 -- # decimal 2 00:05:19.003 03:55:47 thread -- scripts/common.sh@353 -- # local d=2 00:05:19.003 03:55:47 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.003 03:55:47 thread -- scripts/common.sh@355 -- # echo 2 00:05:19.003 03:55:47 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.003 03:55:47 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.003 03:55:47 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.003 03:55:47 thread -- scripts/common.sh@368 -- # return 0 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:19.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.003 --rc genhtml_branch_coverage=1 00:05:19.003 --rc genhtml_function_coverage=1 00:05:19.003 --rc genhtml_legend=1 00:05:19.003 --rc geninfo_all_blocks=1 00:05:19.003 --rc geninfo_unexecuted_blocks=1 00:05:19.003 00:05:19.003 ' 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:19.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.003 --rc genhtml_branch_coverage=1 00:05:19.003 --rc genhtml_function_coverage=1 00:05:19.003 --rc genhtml_legend=1 00:05:19.003 --rc geninfo_all_blocks=1 00:05:19.003 --rc geninfo_unexecuted_blocks=1 00:05:19.003 00:05:19.003 ' 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:19.003 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.003 --rc genhtml_branch_coverage=1 00:05:19.003 --rc genhtml_function_coverage=1 00:05:19.003 --rc genhtml_legend=1 00:05:19.003 --rc geninfo_all_blocks=1 00:05:19.003 --rc geninfo_unexecuted_blocks=1 00:05:19.003 00:05:19.003 ' 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:19.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.003 --rc genhtml_branch_coverage=1 00:05:19.003 --rc genhtml_function_coverage=1 00:05:19.003 --rc genhtml_legend=1 00:05:19.003 --rc geninfo_all_blocks=1 00:05:19.003 --rc geninfo_unexecuted_blocks=1 00:05:19.003 00:05:19.003 ' 00:05:19.003 03:55:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.003 03:55:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.003 ************************************ 00:05:19.003 START TEST thread_poller_perf 00:05:19.003 ************************************ 00:05:19.003 03:55:47 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.003 [2024-12-09 03:55:47.487603] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:05:19.003 [2024-12-09 03:55:47.487669] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid113889 ] 00:05:19.003 [2024-12-09 03:55:47.555118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.261 [2024-12-09 03:55:47.611962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.261 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:20.192 [2024-12-09T02:55:48.768Z] ====================================== 00:05:20.192 [2024-12-09T02:55:48.768Z] busy:2713055937 (cyc) 00:05:20.192 [2024-12-09T02:55:48.768Z] total_run_count: 367000 00:05:20.192 [2024-12-09T02:55:48.768Z] tsc_hz: 2700000000 (cyc) 00:05:20.192 [2024-12-09T02:55:48.768Z] ====================================== 00:05:20.192 [2024-12-09T02:55:48.768Z] poller_cost: 7392 (cyc), 2737 (nsec) 00:05:20.192 00:05:20.192 real 0m1.208s 00:05:20.192 user 0m1.132s 00:05:20.192 sys 0m0.071s 00:05:20.192 03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.192 03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.192 ************************************ 00:05:20.192 END TEST thread_poller_perf 00:05:20.192 ************************************ 00:05:20.192 03:55:48 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.192 03:55:48 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:20.192 03:55:48 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.192 03:55:48 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.192 ************************************ 00:05:20.192 START TEST thread_poller_perf 00:05:20.192 
************************************ 00:05:20.192 03:55:48 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.192 [2024-12-09 03:55:48.751889] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:20.192 [2024-12-09 03:55:48.751955] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114041 ] 00:05:20.449 [2024-12-09 03:55:48.819169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.449 [2024-12-09 03:55:48.874404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.449 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:21.381 [2024-12-09T02:55:49.957Z] ====================================== 00:05:21.381 [2024-12-09T02:55:49.957Z] busy:2702088930 (cyc) 00:05:21.381 [2024-12-09T02:55:49.957Z] total_run_count: 4436000 00:05:21.381 [2024-12-09T02:55:49.957Z] tsc_hz: 2700000000 (cyc) 00:05:21.381 [2024-12-09T02:55:49.957Z] ====================================== 00:05:21.381 [2024-12-09T02:55:49.957Z] poller_cost: 609 (cyc), 225 (nsec) 00:05:21.381 00:05:21.381 real 0m1.202s 00:05:21.381 user 0m1.124s 00:05:21.381 sys 0m0.073s 00:05:21.381 03:55:49 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.381 03:55:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.381 ************************************ 00:05:21.381 END TEST thread_poller_perf 00:05:21.381 ************************************ 00:05:21.640 03:55:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:21.640 00:05:21.640 real 0m2.658s 00:05:21.640 user 0m2.398s 00:05:21.640 sys 0m0.265s 00:05:21.640 03:55:49 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:21.640 03:55:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 ************************************ 00:05:21.640 END TEST thread 00:05:21.640 ************************************ 00:05:21.640 03:55:49 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:21.640 03:55:49 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:21.640 03:55:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:21.640 03:55:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:21.640 03:55:49 -- common/autotest_common.sh@10 -- # set +x 00:05:21.640 ************************************ 00:05:21.640 START TEST app_cmdline 00:05:21.640 ************************************ 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:21.640 * Looking for test storage... 00:05:21.640 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:21.640 03:55:50 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:21.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.640 --rc genhtml_branch_coverage=1 
00:05:21.640 --rc genhtml_function_coverage=1 00:05:21.640 --rc genhtml_legend=1 00:05:21.640 --rc geninfo_all_blocks=1 00:05:21.640 --rc geninfo_unexecuted_blocks=1 00:05:21.640 00:05:21.640 ' 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:21.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.640 --rc genhtml_branch_coverage=1 00:05:21.640 --rc genhtml_function_coverage=1 00:05:21.640 --rc genhtml_legend=1 00:05:21.640 --rc geninfo_all_blocks=1 00:05:21.640 --rc geninfo_unexecuted_blocks=1 00:05:21.640 00:05:21.640 ' 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:21.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.640 --rc genhtml_branch_coverage=1 00:05:21.640 --rc genhtml_function_coverage=1 00:05:21.640 --rc genhtml_legend=1 00:05:21.640 --rc geninfo_all_blocks=1 00:05:21.640 --rc geninfo_unexecuted_blocks=1 00:05:21.640 00:05:21.640 ' 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:21.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:21.640 --rc genhtml_branch_coverage=1 00:05:21.640 --rc genhtml_function_coverage=1 00:05:21.640 --rc genhtml_legend=1 00:05:21.640 --rc geninfo_all_blocks=1 00:05:21.640 --rc geninfo_unexecuted_blocks=1 00:05:21.640 00:05:21.640 ' 00:05:21.640 03:55:50 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:21.640 03:55:50 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=114250 00:05:21.640 03:55:50 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:21.640 03:55:50 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 114250 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 114250 ']' 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:21.640 03:55:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:21.898 [2024-12-09 03:55:50.217448] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:21.899 [2024-12-09 03:55:50.217552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid114250 ] 00:05:21.899 [2024-12-09 03:55:50.287859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.899 [2024-12-09 03:55:50.349267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.157 03:55:50 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.157 03:55:50 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:22.157 03:55:50 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:22.417 { 00:05:22.417 "version": "SPDK v25.01-pre git sha1 c4269c6e2", 00:05:22.417 "fields": { 00:05:22.417 "major": 25, 00:05:22.417 "minor": 1, 00:05:22.417 "patch": 0, 00:05:22.417 "suffix": "-pre", 00:05:22.417 "commit": "c4269c6e2" 00:05:22.417 } 00:05:22.417 } 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:22.417 03:55:50 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:22.417 03:55:50 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.676 request: 00:05:22.676 { 00:05:22.676 "method": "env_dpdk_get_mem_stats", 00:05:22.676 "req_id": 1 00:05:22.676 } 00:05:22.676 Got JSON-RPC error response 00:05:22.676 response: 00:05:22.676 { 00:05:22.676 "code": -32601, 00:05:22.676 "message": "Method not found" 00:05:22.676 } 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.676 03:55:51 app_cmdline -- app/cmdline.sh@1 -- # killprocess 114250 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 114250 ']' 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 114250 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114250 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114250' 00:05:22.676 killing process with pid 114250 00:05:22.676 03:55:51 
app_cmdline -- common/autotest_common.sh@973 -- # kill 114250 00:05:22.676 03:55:51 app_cmdline -- common/autotest_common.sh@978 -- # wait 114250 00:05:23.242 00:05:23.242 real 0m1.604s 00:05:23.242 user 0m1.990s 00:05:23.242 sys 0m0.480s 00:05:23.242 03:55:51 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.242 03:55:51 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.243 ************************************ 00:05:23.243 END TEST app_cmdline 00:05:23.243 ************************************ 00:05:23.243 03:55:51 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.243 03:55:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.243 03:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.243 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:23.243 ************************************ 00:05:23.243 START TEST version 00:05:23.243 ************************************ 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.243 * Looking for test storage... 
00:05:23.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.243 03:55:51 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.243 03:55:51 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.243 03:55:51 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.243 03:55:51 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.243 03:55:51 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.243 03:55:51 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.243 03:55:51 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.243 03:55:51 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.243 03:55:51 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.243 03:55:51 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.243 03:55:51 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.243 03:55:51 version -- scripts/common.sh@344 -- # case "$op" in 00:05:23.243 03:55:51 version -- scripts/common.sh@345 -- # : 1 00:05:23.243 03:55:51 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.243 03:55:51 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.243 03:55:51 version -- scripts/common.sh@365 -- # decimal 1 00:05:23.243 03:55:51 version -- scripts/common.sh@353 -- # local d=1 00:05:23.243 03:55:51 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.243 03:55:51 version -- scripts/common.sh@355 -- # echo 1 00:05:23.243 03:55:51 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.243 03:55:51 version -- scripts/common.sh@366 -- # decimal 2 00:05:23.243 03:55:51 version -- scripts/common.sh@353 -- # local d=2 00:05:23.243 03:55:51 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.243 03:55:51 version -- scripts/common.sh@355 -- # echo 2 00:05:23.243 03:55:51 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.243 03:55:51 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.243 03:55:51 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.243 03:55:51 version -- scripts/common.sh@368 -- # return 0 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.243 --rc genhtml_branch_coverage=1 00:05:23.243 --rc genhtml_function_coverage=1 00:05:23.243 --rc genhtml_legend=1 00:05:23.243 --rc geninfo_all_blocks=1 00:05:23.243 --rc geninfo_unexecuted_blocks=1 00:05:23.243 00:05:23.243 ' 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.243 --rc genhtml_branch_coverage=1 00:05:23.243 --rc genhtml_function_coverage=1 00:05:23.243 --rc genhtml_legend=1 00:05:23.243 --rc geninfo_all_blocks=1 00:05:23.243 --rc geninfo_unexecuted_blocks=1 00:05:23.243 00:05:23.243 ' 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.243 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.243 --rc genhtml_branch_coverage=1 00:05:23.243 --rc genhtml_function_coverage=1 00:05:23.243 --rc genhtml_legend=1 00:05:23.243 --rc geninfo_all_blocks=1 00:05:23.243 --rc geninfo_unexecuted_blocks=1 00:05:23.243 00:05:23.243 ' 00:05:23.243 03:55:51 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.243 --rc genhtml_branch_coverage=1 00:05:23.243 --rc genhtml_function_coverage=1 00:05:23.243 --rc genhtml_legend=1 00:05:23.243 --rc geninfo_all_blocks=1 00:05:23.243 --rc geninfo_unexecuted_blocks=1 00:05:23.243 00:05:23.243 ' 00:05:23.243 03:55:51 version -- app/version.sh@17 -- # get_header_version major 00:05:23.243 03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.243 03:55:51 version -- app/version.sh@14 -- # cut -f2 00:05:23.243 03:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.502 03:55:51 version -- app/version.sh@17 -- # major=25 00:05:23.502 03:55:51 version -- app/version.sh@18 -- # get_header_version minor 00:05:23.502 03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.502 03:55:51 version -- app/version.sh@14 -- # cut -f2 00:05:23.502 03:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.502 03:55:51 version -- app/version.sh@18 -- # minor=1 00:05:23.502 03:55:51 version -- app/version.sh@19 -- # get_header_version patch 00:05:23.502 03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.502 03:55:51 version -- app/version.sh@14 -- # cut -f2 00:05:23.502 03:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.502 
03:55:51 version -- app/version.sh@19 -- # patch=0 00:05:23.502 03:55:51 version -- app/version.sh@20 -- # get_header_version suffix 00:05:23.502 03:55:51 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.502 03:55:51 version -- app/version.sh@14 -- # cut -f2 00:05:23.502 03:55:51 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.502 03:55:51 version -- app/version.sh@20 -- # suffix=-pre 00:05:23.502 03:55:51 version -- app/version.sh@22 -- # version=25.1 00:05:23.502 03:55:51 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:23.502 03:55:51 version -- app/version.sh@28 -- # version=25.1rc0 00:05:23.502 03:55:51 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:23.502 03:55:51 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:23.502 03:55:51 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:23.502 03:55:51 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:23.502 00:05:23.502 real 0m0.198s 00:05:23.502 user 0m0.137s 00:05:23.502 sys 0m0.086s 00:05:23.502 03:55:51 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.502 03:55:51 version -- common/autotest_common.sh@10 -- # set +x 00:05:23.502 ************************************ 00:05:23.502 END TEST version 00:05:23.502 ************************************ 00:05:23.502 03:55:51 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:23.502 03:55:51 -- spdk/autotest.sh@194 -- # uname -s 00:05:23.502 03:55:51 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:23.502 03:55:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.502 03:55:51 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.502 03:55:51 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:23.502 03:55:51 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:23.502 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:23.502 03:55:51 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:23.502 03:55:51 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:23.502 03:55:51 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:23.502 03:55:51 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.502 03:55:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.502 03:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:23.502 ************************************ 00:05:23.502 START TEST nvmf_tcp 00:05:23.502 ************************************ 00:05:23.502 03:55:51 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:23.502 * Looking for test storage... 
00:05:23.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:23.502 03:55:52 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.502 03:55:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.502 03:55:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.762 03:55:52 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.762 --rc genhtml_branch_coverage=1 00:05:23.762 --rc genhtml_function_coverage=1 00:05:23.762 --rc genhtml_legend=1 00:05:23.762 --rc geninfo_all_blocks=1 00:05:23.762 --rc geninfo_unexecuted_blocks=1 00:05:23.762 00:05:23.762 ' 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.762 --rc genhtml_branch_coverage=1 00:05:23.762 --rc genhtml_function_coverage=1 00:05:23.762 --rc genhtml_legend=1 00:05:23.762 --rc geninfo_all_blocks=1 00:05:23.762 --rc geninfo_unexecuted_blocks=1 00:05:23.762 00:05:23.762 ' 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:23.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.762 --rc genhtml_branch_coverage=1 00:05:23.762 --rc genhtml_function_coverage=1 00:05:23.762 --rc genhtml_legend=1 00:05:23.762 --rc geninfo_all_blocks=1 00:05:23.762 --rc geninfo_unexecuted_blocks=1 00:05:23.762 00:05:23.762 ' 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.762 --rc genhtml_branch_coverage=1 00:05:23.762 --rc genhtml_function_coverage=1 00:05:23.762 --rc genhtml_legend=1 00:05:23.762 --rc geninfo_all_blocks=1 00:05:23.762 --rc geninfo_unexecuted_blocks=1 00:05:23.762 00:05:23.762 ' 00:05:23.762 03:55:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:23.762 03:55:52 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:23.762 03:55:52 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.762 03:55:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.762 ************************************ 00:05:23.762 START TEST nvmf_target_core 00:05:23.762 ************************************ 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:23.762 * Looking for test storage... 
00:05:23.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:23.762 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.763 --rc genhtml_branch_coverage=1 00:05:23.763 --rc genhtml_function_coverage=1 00:05:23.763 --rc genhtml_legend=1 00:05:23.763 --rc geninfo_all_blocks=1 00:05:23.763 --rc geninfo_unexecuted_blocks=1 00:05:23.763 00:05:23.763 ' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.763 --rc genhtml_branch_coverage=1 
00:05:23.763 --rc genhtml_function_coverage=1 00:05:23.763 --rc genhtml_legend=1 00:05:23.763 --rc geninfo_all_blocks=1 00:05:23.763 --rc geninfo_unexecuted_blocks=1 00:05:23.763 00:05:23.763 ' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.763 --rc genhtml_branch_coverage=1 00:05:23.763 --rc genhtml_function_coverage=1 00:05:23.763 --rc genhtml_legend=1 00:05:23.763 --rc geninfo_all_blocks=1 00:05:23.763 --rc geninfo_unexecuted_blocks=1 00:05:23.763 00:05:23.763 ' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.763 --rc genhtml_branch_coverage=1 00:05:23.763 --rc genhtml_function_coverage=1 00:05:23.763 --rc genhtml_legend=1 00:05:23.763 --rc geninfo_all_blocks=1 00:05:23.763 --rc geninfo_unexecuted_blocks=1 00:05:23.763 00:05:23.763 ' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:23.763 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:23.763 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:23.764 03:55:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:23.764 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:23.764 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.764 03:55:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:23.764 ************************************ 00:05:23.764 START TEST nvmf_abort 00:05:23.764 ************************************ 00:05:23.764 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.023 * Looking for test storage... 
00:05:24.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.023 
03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.023 --rc genhtml_branch_coverage=1 00:05:24.023 --rc genhtml_function_coverage=1 00:05:24.023 --rc genhtml_legend=1 00:05:24.023 --rc geninfo_all_blocks=1 00:05:24.023 --rc 
geninfo_unexecuted_blocks=1 00:05:24.023 00:05:24.023 ' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.023 --rc genhtml_branch_coverage=1 00:05:24.023 --rc genhtml_function_coverage=1 00:05:24.023 --rc genhtml_legend=1 00:05:24.023 --rc geninfo_all_blocks=1 00:05:24.023 --rc geninfo_unexecuted_blocks=1 00:05:24.023 00:05:24.023 ' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.023 --rc genhtml_branch_coverage=1 00:05:24.023 --rc genhtml_function_coverage=1 00:05:24.023 --rc genhtml_legend=1 00:05:24.023 --rc geninfo_all_blocks=1 00:05:24.023 --rc geninfo_unexecuted_blocks=1 00:05:24.023 00:05:24.023 ' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.023 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.023 --rc genhtml_branch_coverage=1 00:05:24.023 --rc genhtml_function_coverage=1 00:05:24.023 --rc genhtml_legend=1 00:05:24.023 --rc geninfo_all_blocks=1 00:05:24.023 --rc geninfo_unexecuted_blocks=1 00:05:24.023 00:05:24.023 ' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.023 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.024 03:55:52 
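The `NVME_HOSTNQN` captured above is produced by `nvme gen-hostnqn`, which emits a uuid-based NQN. A sketch of validating that shape, using the exact NQN this log recorded:

```shell
# Check that a hostnqn matches the nqn.2014-08.org.nvmexpress:uuid:<uuid>
# form that `nvme gen-hostnqn` produces (value taken from this log).
hostnqn="nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55"
if echo "$hostnqn" | grep -Eq '^nqn\.2014-08\.org\.nvmexpress:uuid:[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  echo "valid uuid NQN"
fi
```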
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.024 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:24.024 03:55:52 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:26.565 03:55:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:26.565 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.565 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:26.566 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:26.566 03:55:54 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:26.566 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:05:26.566 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
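The "Found net devices under ..." lines above come from common.sh walking `/sys/bus/pci/devices/$pci/net/` for each detected NIC. A sketch of that mapping with the base directory parameterized, so it can be pointed at a throwaway tree instead of the real sysfs:

```shell
# Map a PCI function to the net interface(s) registered under it, the way
# nvmf/common.sh@411 does. base is /sys/bus/pci/devices on a real host.
list_pci_net_devs() {
  local base=$1 pci=$2 d
  for d in "$base/$pci/net/"*; do
    [ -e "$d" ] && echo "${d##*/}"   # strip the path, keep the ifname
  done
}

# On the host in this log, list_pci_net_devs /sys/bus/pci/devices 0000:0a:00.0
# would print cvl_0_0.
```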
"$NVMF_TARGET_NAMESPACE") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:26.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:26.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:05:26.566 00:05:26.566 --- 10.0.0.2 ping statistics --- 00:05:26.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.566 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:26.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:26.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:05:26.566 00:05:26.566 --- 10.0.0.1 ping statistics --- 00:05:26.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:26.566 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort 
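The namespace plumbing and ping checks above can be condensed into one sequence. This is a dry-run sketch: `run()` prints each command instead of executing it, since the real thing needs root; the interface and namespace names are the ones from this log.

```shell
run() { echo "+ $*"; }   # swap in `sudo "$@"` to actually execute

setup_tcp_test_net() {
  local tgt_if=$1 ini_if=$2 ns=$3
  run ip netns add "$ns"
  run ip link set "$tgt_if" netns "$ns"                          # target port into ns
  run ip addr add 10.0.0.1/24 dev "$ini_if"                      # initiator side
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"  # target side
  run ip link set "$ini_if" up
  run ip netns exec "$ns" ip link set "$tgt_if" up
  run ip netns exec "$ns" ip link set lo up
  run ping -c 1 10.0.0.2                                         # reachability check
}

setup_tcp_test_net cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Putting the target port in its own namespace is what lets one physical host act as both initiator and target over real NIC hardware, which is why the log prefixes target-side commands with `ip netns exec cvl_0_0_ns_spdk`.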
-- common/autotest_common.sh@726 -- # xtrace_disable 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=116354 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 116354 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 116354 ']' 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.566 03:55:54 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 [2024-12-09 03:55:54.840449] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:05:26.566 [2024-12-09 03:55:54.840527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:26.566 [2024-12-09 03:55:54.918125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:26.566 [2024-12-09 03:55:54.979003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:26.566 [2024-12-09 03:55:54.979088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:26.566 [2024-12-09 03:55:54.979103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.566 [2024-12-09 03:55:54.979114] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.566 [2024-12-09 03:55:54.979124] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:26.566 [2024-12-09 03:55:54.980694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:26.566 [2024-12-09 03:55:54.980761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:26.566 [2024-12-09 03:55:54.980765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.566 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.566 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:26.566 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:26.566 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.566 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.566 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.567 [2024-12-09 03:55:55.133781] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.567 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.824 Malloc0 00:05:26.824 03:55:55 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.824 Delay0 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.824 [2024-12-09 03:55:55.211573] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.824 03:55:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:26.824 [2024-12-09 03:55:55.367412] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:29.353 Initializing NVMe Controllers 00:05:29.353 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:29.353 controller IO queue size 128 less than required 00:05:29.353 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:29.353 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:29.353 Initialization complete. Launching workers. 
00:05:29.353 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29233 00:05:29.353 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29294, failed to submit 62 00:05:29.353 success 29237, unsuccessful 57, failed 0 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:29.353 rmmod nvme_tcp 00:05:29.353 rmmod nvme_fabrics 00:05:29.353 rmmod nvme_keyring 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:29.353 03:55:57 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 116354 ']' 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 116354 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 116354 ']' 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 116354 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116354 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116354' 00:05:29.353 killing process with pid 116354 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 116354 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 116354 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:29.353 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:29.354 03:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:31.889 00:05:31.889 real 0m7.567s 00:05:31.889 user 0m11.034s 00:05:31.889 sys 0m2.536s 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:31.889 ************************************ 00:05:31.889 END TEST nvmf_abort 00:05:31.889 ************************************ 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:31.889 ************************************ 00:05:31.889 START TEST nvmf_ns_hotplug_stress 00:05:31.889 ************************************ 00:05:31.889 03:55:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:31.889 * Looking for test storage... 00:05:31.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.889 03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.890 03:55:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.890 
03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.890 03:56:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.890 --rc genhtml_branch_coverage=1 00:05:31.890 --rc genhtml_function_coverage=1 00:05:31.890 --rc genhtml_legend=1 00:05:31.890 --rc geninfo_all_blocks=1 00:05:31.890 --rc geninfo_unexecuted_blocks=1 00:05:31.890 00:05:31.890 ' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.890 --rc genhtml_branch_coverage=1 00:05:31.890 --rc genhtml_function_coverage=1 00:05:31.890 --rc genhtml_legend=1 00:05:31.890 --rc geninfo_all_blocks=1 00:05:31.890 --rc geninfo_unexecuted_blocks=1 00:05:31.890 00:05:31.890 ' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.890 --rc genhtml_branch_coverage=1 00:05:31.890 --rc genhtml_function_coverage=1 00:05:31.890 --rc genhtml_legend=1 00:05:31.890 --rc geninfo_all_blocks=1 00:05:31.890 --rc geninfo_unexecuted_blocks=1 00:05:31.890 00:05:31.890 ' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.890 --rc genhtml_branch_coverage=1 00:05:31.890 --rc genhtml_function_coverage=1 00:05:31.890 --rc genhtml_legend=1 00:05:31.890 --rc geninfo_all_blocks=1 00:05:31.890 --rc geninfo_unexecuted_blocks=1 00:05:31.890 
00:05:31.890 ' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.890 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:31.891 03:56:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:33.797 03:56:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:33.797 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:33.797 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:33.797 03:56:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:33.797 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:33.797 03:56:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:33.797 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:33.797 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:34.056 03:56:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:34.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:05:34.056 00:05:34.056 --- 10.0.0.2 ping statistics --- 00:05:34.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.056 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:05:34.056 00:05:34.056 --- 10.0.0.1 ping statistics --- 00:05:34.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.056 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=118720 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 118720 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 118720 ']' 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.056 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.056 [2024-12-09 03:56:02.479678] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:05:34.056 [2024-12-09 03:56:02.479771] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.056 [2024-12-09 03:56:02.549411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.056 [2024-12-09 03:56:02.602577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.056 [2024-12-09 03:56:02.602635] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.056 [2024-12-09 03:56:02.602658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.056 [2024-12-09 03:56:02.602668] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.056 [2024-12-09 03:56:02.602678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:34.056 [2024-12-09 03:56:02.604143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.056 [2024-12-09 03:56:02.604251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.056 [2024-12-09 03:56:02.604254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:34.314 03:56:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:34.572 [2024-12-09 03:56:02.997570] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.572 03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:34.830 03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:35.087 [2024-12-09 03:56:03.532403] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:35.087 03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:35.345 03:56:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:35.603 Malloc0 00:05:35.603 03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:35.862 Delay0 00:05:35.862 03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:36.120 03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:36.378 NULL1 00:05:36.378 03:56:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:36.943 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=119139 00:05:36.943 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:36.943 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:36.943 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.943 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:37.199 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:37.199 03:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:37.456 true 00:05:37.456 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:37.456 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.021 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.021 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:38.021 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:38.278 true 00:05:38.278 03:56:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:38.278 03:56:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.534 03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:38.790 03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:38.790 03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:39.047 true 00:05:39.304 03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:39.304 03:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:40.258 Read completed with error (sct=0, sc=11) 00:05:40.258 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:40.515 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:40.515 03:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:40.773 true 00:05:40.773 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 
-- # kill -0 119139 00:05:40.773 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.029 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:41.285 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:41.285 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:41.542 true 00:05:41.542 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:41.542 03:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:41.799 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:42.057 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:42.057 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:42.315 true 00:05:42.315 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:42.315 03:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:43.250 03:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:43.250 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:43.508 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:43.508 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:43.765 true 00:05:43.765 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:43.766 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.023 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.281 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:44.281 03:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:44.540 true 00:05:44.540 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:44.540 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.799 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.057 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:45.057 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:45.315 true 00:05:45.315 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:45.315 03:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.708 03:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.708 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:46.966 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:46.966 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:46.966 true 00:05:47.223 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
119139 00:05:47.223 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.481 03:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.739 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:47.739 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:47.997 true 00:05:47.997 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:47.997 03:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.931 03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.931 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:48.931 03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:48.931 03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:49.188 true 00:05:49.188 03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:49.188 03:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.445 03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.702 03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:49.702 03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:50.267 true 00:05:50.267 03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:50.267 03:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.199 03:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.199 03:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:51.199 03:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:51.457 true 00:05:51.457 03:56:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:51.457 03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.714 03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.971 03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:51.971 03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:52.536 true 00:05:52.536 03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:52.536 03:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.536 03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.793 03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:52.793 03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:53.051 true 00:05:53.051 03:56:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:53.051 03:56:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.426 03:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.426 03:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:54.426 03:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:54.685 true 00:05:54.685 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:54.685 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.944 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.202 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:55.202 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:55.460 true 00:05:55.460 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:55.460 03:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.719 03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.977 03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:55.977 03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:56.235 true 00:05:56.235 03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:56.235 03:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.609 03:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.609 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:57.609 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:57.867 true 00:05:57.867 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:57.867 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.123 
03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.380 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:58.380 03:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:58.637 true 00:05:58.637 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:58.637 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.894 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.151 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:59.152 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:59.409 true 00:05:59.409 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:05:59.409 03:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.341 03:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.598 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:00.598 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:00.856 true 00:06:00.856 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:00.856 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.114 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.371 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:01.371 03:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:01.637 true 00:06:01.895 03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:01.895 03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.152 03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.409 
03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:02.409 03:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:02.667 true 00:06:02.667 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:02.667 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.599 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:03.600 03:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.858 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:03.858 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:04.116 true 00:06:04.116 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:04.116 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.374 03:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.632 03:56:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:04.632 03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:04.890 true 00:06:04.890 03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:04.890 03:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:05.825 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:06.083 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:06.083 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:06.342 true 00:06:06.342 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:06.342 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.600 03:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.858 03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:06.858 03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:07.115 true 00:06:07.115 03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:07.115 03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.115 Initializing NVMe Controllers
00:06:07.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:07.115 Controller IO queue size 128, less than required.
00:06:07.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.115 Controller IO queue size 128, less than required.
00:06:07.115 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:07.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:07.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:07.115 Initialization complete. Launching workers.
00:06:07.116 ========================================================
00:06:07.116                                                           Latency(us)
00:06:07.116 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:06:07.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     433.29       0.21  109458.96    3131.70 1012564.34
00:06:07.116 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    7844.77       3.83   16268.03    2950.24  360041.59
00:06:07.116 ========================================================
00:06:07.116 Total                                                                    :    8278.06       4.04   21145.86    2950.24 1012564.34
00:06:07.373 03:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.631 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:07.631 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:07.889 true 00:06:07.889 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 119139 00:06:07.889 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (119139) - No such process 00:06:07.889 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 119139 00:06:07.889 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.147 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:08.405 
03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:08.405 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:08.405 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:08.405 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.405 03:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:08.663 null0 00:06:08.663 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.663 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.663 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:08.921 null1 00:06:08.921 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:08.921 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:08.921 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:09.178 null2 00:06:09.178 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.178 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.178 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:09.436 null3 00:06:09.436 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.436 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.436 03:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:09.693 null4 00:06:09.693 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:09.693 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:09.693 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:10.260 null5 00:06:10.260 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.260 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.260 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:10.260 null6 00:06:10.517 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.518 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.518 03:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:10.776 null7 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.776 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 123218 123219 123221 123223 123225 123227 123229 123231 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:10.777 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.035 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.035 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.036 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.036 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.036 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.036 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.036 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.036 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.295 03:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:11.553 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:11.811 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:12.069 03:56:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.069 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:12.636 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:12.637 03:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:12.896 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.154 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.155 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.413 03:56:41 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.413 03:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:13.672 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:13.931 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.190 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.191 03:56:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.191 03:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.449 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:14.707 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 
03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:14.966 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.225 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:15.484 03:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:15.743 03:56:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:15.743 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.001 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.002 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.568 03:56:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:16.826 rmmod nvme_tcp 00:06:16.826 rmmod nvme_fabrics 00:06:16.826 rmmod nvme_keyring 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 118720 ']' 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 118720 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 118720 ']' 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 118720 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118720 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118720' 00:06:16.826 killing process with pid 118720 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 118720 00:06:16.826 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 118720 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:17.084 03:56:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:18.994 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:18.994 00:06:18.994 real 0m47.648s 00:06:18.994 user 3m42.151s 00:06:18.994 sys 0m15.789s 00:06:18.994 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.994 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:18.994 ************************************ 00:06:18.994 END TEST nvmf_ns_hotplug_stress 00:06:18.994 ************************************ 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.254 ************************************ 00:06:19.254 START TEST nvmf_delete_subsystem 00:06:19.254 ************************************ 00:06:19.254 03:56:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:19.254 * Looking for test storage... 00:06:19.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.254 03:56:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.254 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.255 03:56:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.255 --rc genhtml_branch_coverage=1 00:06:19.255 --rc genhtml_function_coverage=1 00:06:19.255 --rc genhtml_legend=1 00:06:19.255 --rc geninfo_all_blocks=1 00:06:19.255 --rc geninfo_unexecuted_blocks=1 00:06:19.255 00:06:19.255 ' 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.255 --rc genhtml_branch_coverage=1 00:06:19.255 --rc genhtml_function_coverage=1 00:06:19.255 --rc genhtml_legend=1 00:06:19.255 --rc geninfo_all_blocks=1 00:06:19.255 --rc geninfo_unexecuted_blocks=1 00:06:19.255 00:06:19.255 ' 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.255 --rc genhtml_branch_coverage=1 00:06:19.255 --rc genhtml_function_coverage=1 00:06:19.255 --rc genhtml_legend=1 00:06:19.255 --rc geninfo_all_blocks=1 00:06:19.255 --rc geninfo_unexecuted_blocks=1 00:06:19.255 00:06:19.255 ' 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.255 --rc genhtml_branch_coverage=1 00:06:19.255 --rc genhtml_function_coverage=1 00:06:19.255 --rc genhtml_legend=1 00:06:19.255 --rc geninfo_all_blocks=1 00:06:19.255 --rc geninfo_unexecuted_blocks=1 00:06:19.255 00:06:19.255 ' 
00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.255 03:56:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.255 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.256 03:56:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.790 03:56:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:21.790 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:21.790 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:21.790 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:06:21.790 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.790 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:21.791 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.791 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.791 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.791 03:56:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:21.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:06:21.791 00:06:21.791 --- 10.0.0.2 ping statistics --- 00:06:21.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.791 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:06:21.791 00:06:21.791 --- 10.0.0.1 ping statistics --- 00:06:21.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.791 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:21.791 03:56:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=126122 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 126122 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 126122 ']' 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.791 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:21.791 [2024-12-09 03:56:50.253178] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:06:21.791 [2024-12-09 03:56:50.253278] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.791 [2024-12-09 03:56:50.328524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:22.049 [2024-12-09 03:56:50.386572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:22.049 [2024-12-09 03:56:50.386638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:22.049 [2024-12-09 03:56:50.386661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.049 [2024-12-09 03:56:50.386672] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.049 [2024-12-09 03:56:50.386682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:22.049 [2024-12-09 03:56:50.388101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:22.049 [2024-12-09 03:56:50.388106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 [2024-12-09 03:56:50.539434] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 [2024-12-09 03:56:50.555701] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 NULL1 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 Delay0 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.049 03:56:50 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=126149 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:22.049 03:56:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:22.306 [2024-12-09 03:56:50.640480] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:24.203 03:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:24.203 03:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:24.203 03:56:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:24.203 Read completed with error (sct=0, sc=8) 00:06:24.203 Read completed with error (sct=0, sc=8) 00:06:24.203 starting I/O failed: -6 00:06:24.203 Write completed with error (sct=0, sc=8) 00:06:24.203 Read completed with error (sct=0, sc=8) 00:06:24.203 Read completed with error (sct=0, sc=8) 00:06:24.203 Read completed with error (sct=0, sc=8) 00:06:24.203 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error 
(sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 starting I/O failed: -6 
00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 [2024-12-09 03:56:52.763669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f4a0 is same with the state(6) to be set 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read 
completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting 
I/O failed: -6 00:06:24.204 Write completed with error (sct=0, sc=8) 00:06:24.204 Read completed with error (sct=0, sc=8) 00:06:24.204 starting I/O failed: -6 00:06:25.577 [2024-12-09 03:56:53.735876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18209b0 is same with the state(6) to be set 00:06:25.577 [2024-12-09 03:56:53.764598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f2c0 is same with the state(6) to be set 00:06:25.578 [2024-12-09 03:56:53.764887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99c000d7e0 is same with the state(6) to be set 00:06:25.578 [2024-12-09 03:56:53.765960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181f680 is same with the state(6) to be set 00:06:25.578 [2024-12-09 03:56:53.766203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f99c000d020 is same with the state(6) to be set 00:06:25.578 Initializing NVMe Controllers 00:06:25.578 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:25.578 Controller IO queue size 128, less than required. 00:06:25.578 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:25.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:25.578 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:25.578 Initialization complete. Launching workers.
00:06:25.578 ======================================================== 00:06:25.578 Latency(us) 00:06:25.578 Device Information : IOPS MiB/s Average min max 00:06:25.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.66 0.08 893018.98 842.21 1013070.99 00:06:25.578 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 182.58 0.09 918423.41 688.66 1013431.40 00:06:25.578 ======================================================== 00:06:25.578 Total : 354.24 0.17 906112.58 688.66 1013431.40 00:06:25.578 00:06:25.578 [2024-12-09 03:56:53.767378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18209b0 (9): Bad file descriptor 00:06:25.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:25.578 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.578 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:25.578 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 126149 00:06:25.578 03:56:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 126149 00:06:25.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (126149) - No such process 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 126149 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:25.836 03:56:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 126149 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 126149 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:25.836 
03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.836 [2024-12-09 03:56:54.290655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=126672 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:25.836 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:25.836 [2024-12-09 03:56:54.362780] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:26.402 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.402 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:26.402 03:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:26.980 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:26.980 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:26.980 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.237 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.237 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:27.238 03:56:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:27.802 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:27.802 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:27.802 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.366 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.366 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:28.366 03:56:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:28.930 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:28.930 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:28.930 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:29.188 Initializing NVMe Controllers 00:06:29.188 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:29.188 Controller IO queue size 128, less than required. 00:06:29.188 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:29.188 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:29.188 Initialization complete. Launching workers. 00:06:29.188 ======================================================== 00:06:29.188 Latency(us) 00:06:29.188 Device Information : IOPS MiB/s Average min max 00:06:29.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003487.61 1000207.82 1012338.90 00:06:29.188 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004248.73 1000166.81 1041578.45 00:06:29.188 ======================================================== 00:06:29.188 Total : 256.00 0.12 1003868.17 1000166.81 1041578.45 00:06:29.188 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 126672 00:06:29.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (126672) - No such process 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 126672 00:06:29.445 03:56:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:29.445 rmmod nvme_tcp 00:06:29.445 rmmod nvme_fabrics 00:06:29.445 rmmod nvme_keyring 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 126122 ']' 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 126122 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 126122 ']' 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 126122 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126122 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126122' 00:06:29.445 killing process with pid 126122 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 126122 00:06:29.445 03:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 126122 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:29.705 03:56:58 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:29.705 03:56:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:31.634 00:06:31.634 real 0m12.560s 00:06:31.634 user 0m27.932s 00:06:31.634 sys 0m3.020s 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:31.634 ************************************ 00:06:31.634 END TEST nvmf_delete_subsystem 00:06:31.634 ************************************ 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.634 03:57:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:31.892 ************************************ 00:06:31.892 START TEST nvmf_host_management 00:06:31.892 ************************************ 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:31.892 * Looking for test storage... 
00:06:31.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:31.892 03:57:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.892 03:57:00 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.892 --rc genhtml_branch_coverage=1 00:06:31.892 --rc genhtml_function_coverage=1 00:06:31.892 --rc genhtml_legend=1 00:06:31.892 --rc geninfo_all_blocks=1 00:06:31.892 --rc geninfo_unexecuted_blocks=1 00:06:31.892 00:06:31.892 ' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.892 --rc genhtml_branch_coverage=1 00:06:31.892 --rc genhtml_function_coverage=1 00:06:31.892 --rc genhtml_legend=1 00:06:31.892 --rc geninfo_all_blocks=1 00:06:31.892 --rc geninfo_unexecuted_blocks=1 00:06:31.892 00:06:31.892 ' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.892 --rc genhtml_branch_coverage=1 00:06:31.892 --rc genhtml_function_coverage=1 00:06:31.892 --rc genhtml_legend=1 00:06:31.892 --rc geninfo_all_blocks=1 00:06:31.892 --rc geninfo_unexecuted_blocks=1 00:06:31.892 00:06:31.892 ' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.892 --rc genhtml_branch_coverage=1 00:06:31.892 --rc genhtml_function_coverage=1 00:06:31.892 --rc genhtml_legend=1 00:06:31.892 --rc geninfo_all_blocks=1 00:06:31.892 --rc geninfo_unexecuted_blocks=1 00:06:31.892 00:06:31.892 ' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.892 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.893 03:57:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:34.436 03:57:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.436 03:57:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:34.436 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:34.436 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:34.436 03:57:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:34.436 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:34.436 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:34.436 03:57:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.436 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:34.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:06:34.437 00:06:34.437 --- 10.0.0.2 ping statistics --- 00:06:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.437 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:06:34.437 00:06:34.437 --- 10.0.0.1 ping statistics --- 00:06:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.437 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=129140 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 129140 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 129140 ']' 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.437 [2024-12-09 03:57:02.656055] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:06:34.437 [2024-12-09 03:57:02.656139] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.437 [2024-12-09 03:57:02.729189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.437 [2024-12-09 03:57:02.785226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.437 [2024-12-09 03:57:02.785299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.437 [2024-12-09 03:57:02.785328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.437 [2024-12-09 03:57:02.785339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.437 [2024-12-09 03:57:02.785348] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:34.437 [2024-12-09 03:57:02.786805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.437 [2024-12-09 03:57:02.786867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.437 [2024-12-09 03:57:02.786934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.437 [2024-12-09 03:57:02.786937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.437 [2024-12-09 03:57:02.923429] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:34.437 03:57:02 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.437 03:57:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.437 Malloc0 00:06:34.437 [2024-12-09 03:57:02.995381] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.437 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.437 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:34.437 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:34.437 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=129187 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 129187 /var/tmp/bdevperf.sock 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 129187 ']' 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:34.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:34.696 { 00:06:34.696 "params": { 00:06:34.696 "name": "Nvme$subsystem", 00:06:34.696 "trtype": "$TEST_TRANSPORT", 00:06:34.696 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:34.696 "adrfam": "ipv4", 00:06:34.696 "trsvcid": "$NVMF_PORT", 00:06:34.696 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:34.696 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:34.696 "hdgst": ${hdgst:-false}, 
00:06:34.696 "ddgst": ${ddgst:-false} 00:06:34.696 }, 00:06:34.696 "method": "bdev_nvme_attach_controller" 00:06:34.696 } 00:06:34.696 EOF 00:06:34.696 )") 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:34.696 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:34.696 "params": { 00:06:34.696 "name": "Nvme0", 00:06:34.696 "trtype": "tcp", 00:06:34.696 "traddr": "10.0.0.2", 00:06:34.696 "adrfam": "ipv4", 00:06:34.696 "trsvcid": "4420", 00:06:34.696 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:34.696 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:34.696 "hdgst": false, 00:06:34.696 "ddgst": false 00:06:34.696 }, 00:06:34.696 "method": "bdev_nvme_attach_controller" 00:06:34.696 }' 00:06:34.696 [2024-12-09 03:57:03.080015] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:06:34.696 [2024-12-09 03:57:03.080107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129187 ] 00:06:34.696 [2024-12-09 03:57:03.150705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.696 [2024-12-09 03:57:03.210965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.955 Running I/O for 10 seconds... 
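The heredoc-driven config generation traced above (nvmf/common.sh@560-586) can be reproduced as a standalone sketch. This is a hedged reconstruction, not the verbatim helper: the variable values are the ones visible in this run's expanded printf output (tcp, 10.0.0.2, port 4420), and the real gen_nvmf_target_json loops over "${@:-1}" and post-processes the collected entries with jq.

```shell
# Hypothetical standalone rendering of the bdevperf config that
# gen_nvmf_target_json emits for subsystem 0 in the trace above.
subsystem=0
TEST_TRANSPORT=tcp            # values as expanded in this run's output
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

bdevperf then consumes the generated JSON through process substitution, as in the traced command line (`--json /dev/fd/63` is the substituted config stream): `bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10`.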
00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.955 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:34.956 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:35.214 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:35.214 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:35.214 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:35.215 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:35.215 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.215 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.215 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.475 [2024-12-09 03:57:03.802055] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb965b0 is same with the state(6) to be set 00:06:35.475 [previous message repeated 24 more times, timestamps 03:57:03.802142 through 03:57:03.802463] 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.475 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:35.475 [2024-12-09 03:57:03.807241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.475 [2024-12-09 03:57:03.807291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.475 [2024-12-09 03:57:03.807311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000
cdw11:00000000 00:06:35.475 [2024-12-09 03:57:03.807334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.475 [2024-12-09 03:57:03.807349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.475 [2024-12-09 03:57:03.807362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.476 [2024-12-09 03:57:03.807376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:35.476 [2024-12-09 03:57:03.807389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.476 [2024-12-09 03:57:03.807402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0e660 is same with the state(6) to be set 00:06:35.476 [2024-12-09 03:57:03.807745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.476 [2024-12-09 03:57:03.807773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.476 [62 analogous WRITE / ABORTED - SQ DELETION pairs omitted: cid:1 through cid:62, lba:82048 through lba:89856 in steps of 128, timestamps 03:57:03.807815 through 03:57:03.809727] 00:06:35.477 [2024-12-09 03:57:03.809742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:35.477 [2024-12-09 03:57:03.809755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:35.477 [2024-12-09 03:57:03.810938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:35.477 task offset: 81920 on job bdev=Nvme0n1 fails
00:06:35.477 
00:06:35.477 Latency(us) 
00:06:35.477 [2024-12-09T02:57:04.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:06:35.477 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:06:35.477 Job: Nvme0n1 ended in about 0.41 seconds with error 
00:06:35.477 Verification LBA range: start 0x0 length 0x400 
00:06:35.477 Nvme0n1 : 0.41 1565.74 97.86 156.57 0.00 36109.85 2912.71 34564.17 
00:06:35.477 [2024-12-09T02:57:04.053Z] =================================================================================================================== 
00:06:35.477 [2024-12-09T02:57:04.053Z] Total : 1565.74 97.86 156.57 0.00 36109.85 2912.71 34564.17 
00:06:35.477 [2024-12-09 03:57:03.812840] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 
00:06:35.477 [2024-12-09 03:57:03.812868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e0e660 (9): Bad file descriptor 
00:06:35.477 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:35.477 03:57:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 
00:06:35.477 [2024-12-09 03:57:03.860731] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 129187 00:06:36.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (129187) - No such process 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:36.412 { 00:06:36.412 "params": { 00:06:36.412 "name": "Nvme$subsystem", 00:06:36.412 "trtype": "$TEST_TRANSPORT", 00:06:36.412 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:36.412 "adrfam": "ipv4", 00:06:36.412 "trsvcid": "$NVMF_PORT", 00:06:36.412 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:36.412 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:36.412 "hdgst": ${hdgst:-false}, 00:06:36.412 "ddgst": ${ddgst:-false} 00:06:36.412 }, 00:06:36.412 "method": "bdev_nvme_attach_controller" 00:06:36.412 } 00:06:36.412 EOF 00:06:36.412 )") 00:06:36.412 03:57:04 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:36.412 03:57:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:36.412 "params": { 00:06:36.412 "name": "Nvme0", 00:06:36.412 "trtype": "tcp", 00:06:36.412 "traddr": "10.0.0.2", 00:06:36.412 "adrfam": "ipv4", 00:06:36.412 "trsvcid": "4420", 00:06:36.412 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:36.412 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:36.412 "hdgst": false, 00:06:36.412 "ddgst": false 00:06:36.412 }, 00:06:36.412 "method": "bdev_nvme_attach_controller" 00:06:36.412 }' 00:06:36.412 [2024-12-09 03:57:04.867121] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:06:36.412 [2024-12-09 03:57:04.867189] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129464 ] 00:06:36.412 [2024-12-09 03:57:04.936345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.671 [2024-12-09 03:57:04.996560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.929 Running I/O for 1 seconds... 
00:06:37.864 1664.00 IOPS, 104.00 MiB/s 
00:06:37.864 Latency(us) 
00:06:37.864 [2024-12-09T02:57:06.440Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:06:37.864 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:06:37.864 Verification LBA range: start 0x0 length 0x400 
00:06:37.864 Nvme0n1 : 1.01 1703.09 106.44 0.00 0.00 36966.69 7233.23 33204.91 
00:06:37.864 [2024-12-09T02:57:06.440Z] =================================================================================================================== 
00:06:37.864 [2024-12-09T02:57:06.440Z] Total : 1703.09 106.44 0.00 0.00 36966.69 7233.23 33204.91 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 
00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 
00:06:38.123 03:57:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:38.123 rmmod nvme_tcp 00:06:38.123 rmmod nvme_fabrics 00:06:38.123 rmmod nvme_keyring 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 129140 ']' 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 129140 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 129140 ']' 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 129140 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129140 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129140' 00:06:38.123 killing process with pid 129140 00:06:38.123 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 129140 00:06:38.123 03:57:06 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 129140 00:06:38.383 [2024-12-09 03:57:06.917743] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.383 03:57:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.916 03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.916 03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:40.916 00:06:40.916 real 0m8.771s 00:06:40.916 user 0m19.761s 
00:06:40.916 sys 0m2.693s 00:06:40.916 03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.916 03:57:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.916 ************************************ 00:06:40.916 END TEST nvmf_host_management 00:06:40.916 ************************************ 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.916 ************************************ 00:06:40.916 START TEST nvmf_lvol 00:06:40.916 ************************************ 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:40.916 * Looking for test storage... 
00:06:40.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.916 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.917 03:57:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.917 --rc genhtml_branch_coverage=1 00:06:40.917 --rc genhtml_function_coverage=1 00:06:40.917 --rc genhtml_legend=1 00:06:40.917 --rc geninfo_all_blocks=1 00:06:40.917 --rc geninfo_unexecuted_blocks=1 
00:06:40.917 00:06:40.917 ' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.917 --rc genhtml_branch_coverage=1 00:06:40.917 --rc genhtml_function_coverage=1 00:06:40.917 --rc genhtml_legend=1 00:06:40.917 --rc geninfo_all_blocks=1 00:06:40.917 --rc geninfo_unexecuted_blocks=1 00:06:40.917 00:06:40.917 ' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.917 --rc genhtml_branch_coverage=1 00:06:40.917 --rc genhtml_function_coverage=1 00:06:40.917 --rc genhtml_legend=1 00:06:40.917 --rc geninfo_all_blocks=1 00:06:40.917 --rc geninfo_unexecuted_blocks=1 00:06:40.917 00:06:40.917 ' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.917 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.917 --rc genhtml_branch_coverage=1 00:06:40.917 --rc genhtml_function_coverage=1 00:06:40.917 --rc genhtml_legend=1 00:06:40.917 --rc geninfo_all_blocks=1 00:06:40.917 --rc geninfo_unexecuted_blocks=1 00:06:40.917 00:06:40.917 ' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.917 03:57:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.917 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.918 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.918 03:57:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:42.823 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.823 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:06:42.823 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:42.823 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:42.823 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:42.823 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:42.824 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:42.824 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:42.824 
03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:42.824 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:42.824 03:57:11 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:42.824 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:42.824 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:42.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:42.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:06:42.825 00:06:42.825 --- 10.0.0.2 ping statistics --- 00:06:42.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.825 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:42.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:06:42.825 00:06:42.825 --- 10.0.0.1 ping statistics --- 00:06:42.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.825 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:42.825 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=132188 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 132188 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 132188 ']' 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.083 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.083 [2024-12-09 03:57:11.481399] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:06:43.083 [2024-12-09 03:57:11.481473] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.083 [2024-12-09 03:57:11.548546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:43.083 [2024-12-09 03:57:11.603105] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:43.083 [2024-12-09 03:57:11.603156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.083 [2024-12-09 03:57:11.603179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.083 [2024-12-09 03:57:11.603190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.083 [2024-12-09 03:57:11.603199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:43.083 [2024-12-09 03:57:11.604650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.083 [2024-12-09 03:57:11.604704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.083 [2024-12-09 03:57:11.604708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.341 03:57:11 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:43.599 [2024-12-09 03:57:11.987668] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.599 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:43.857 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:43.857 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:44.115 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:44.115 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:44.373 03:57:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:44.632 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=4d6b20f9-7b12-4320-bc2f-c3586e6ecf71 00:06:44.632 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d6b20f9-7b12-4320-bc2f-c3586e6ecf71 lvol 20 00:06:44.890 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=12a87274-9a9a-48e6-87f8-5d638cc27fc0 00:06:44.890 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:45.148 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 12a87274-9a9a-48e6-87f8-5d638cc27fc0 00:06:45.406 03:57:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:45.664 [2024-12-09 03:57:14.217250] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:45.664 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:46.236 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=132509 00:06:46.236 03:57:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:46.236 03:57:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:47.171 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 12a87274-9a9a-48e6-87f8-5d638cc27fc0 MY_SNAPSHOT 00:06:47.429 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f91135f0-b5b3-4211-8884-552842fc46e0 00:06:47.429 03:57:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 12a87274-9a9a-48e6-87f8-5d638cc27fc0 30 00:06:47.687 03:57:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f91135f0-b5b3-4211-8884-552842fc46e0 MY_CLONE 00:06:47.945 03:57:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=632feaf9-dfbe-424f-921e-b9c4176bd00c 00:06:47.945 03:57:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 632feaf9-dfbe-424f-921e-b9c4176bd00c 00:06:48.878 03:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 132509 00:06:56.982 Initializing NVMe Controllers 00:06:56.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:56.982 Controller IO queue size 128, less than required. 00:06:56.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:06:56.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:56.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:56.982 Initialization complete. Launching workers. 00:06:56.982 ======================================================== 00:06:56.982 Latency(us) 00:06:56.982 Device Information : IOPS MiB/s Average min max 00:06:56.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10554.70 41.23 12136.19 488.87 86009.03 00:06:56.982 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10319.10 40.31 12409.19 2326.38 55050.34 00:06:56.982 ======================================================== 00:06:56.982 Total : 20873.80 81.54 12271.15 488.87 86009.03 00:06:56.982 00:06:56.982 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:56.982 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 12a87274-9a9a-48e6-87f8-5d638cc27fc0 00:06:57.240 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d6b20f9-7b12-4320-bc2f-c3586e6ecf71 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.498 rmmod nvme_tcp 00:06:57.498 rmmod nvme_fabrics 00:06:57.498 rmmod nvme_keyring 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 132188 ']' 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 132188 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 132188 ']' 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 132188 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132188 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132188' 00:06:57.498 killing process with pid 132188 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 132188 00:06:57.498 03:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 132188 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.758 03:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:00.293 00:07:00.293 real 0m19.251s 00:07:00.293 user 1m5.940s 00:07:00.293 sys 0m5.407s 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:00.293 ************************************ 00:07:00.293 END TEST nvmf_lvol 00:07:00.293 
************************************ 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:00.293 ************************************ 00:07:00.293 START TEST nvmf_lvs_grow 00:07:00.293 ************************************ 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:00.293 * Looking for test storage... 00:07:00.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:00.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.293 --rc genhtml_branch_coverage=1 00:07:00.293 --rc genhtml_function_coverage=1 00:07:00.293 --rc genhtml_legend=1 00:07:00.293 --rc geninfo_all_blocks=1 00:07:00.293 --rc geninfo_unexecuted_blocks=1 00:07:00.293 00:07:00.293 ' 
00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:00.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.293 --rc genhtml_branch_coverage=1 00:07:00.293 --rc genhtml_function_coverage=1 00:07:00.293 --rc genhtml_legend=1 00:07:00.293 --rc geninfo_all_blocks=1 00:07:00.293 --rc geninfo_unexecuted_blocks=1 00:07:00.293 00:07:00.293 ' 00:07:00.293 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:00.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.293 --rc genhtml_branch_coverage=1 00:07:00.293 --rc genhtml_function_coverage=1 00:07:00.293 --rc genhtml_legend=1 00:07:00.293 --rc geninfo_all_blocks=1 00:07:00.294 --rc geninfo_unexecuted_blocks=1 00:07:00.294 00:07:00.294 ' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:00.294 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.294 --rc genhtml_branch_coverage=1 00:07:00.294 --rc genhtml_function_coverage=1 00:07:00.294 --rc genhtml_legend=1 00:07:00.294 --rc geninfo_all_blocks=1 00:07:00.294 --rc geninfo_unexecuted_blocks=1 00:07:00.294 00:07:00.294 ' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.294 03:57:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.294 
03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.294 03:57:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:00.294 
03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:00.294 03:57:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:02.198 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.198 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:02.199 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:02.199 
03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:02.199 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:02.199 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:02.199 03:57:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:02.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:02.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:07:02.199 00:07:02.199 --- 10.0.0.2 ping statistics --- 00:07:02.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.199 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:02.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:02.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:07:02.199 00:07:02.199 --- 10.0.0.1 ping statistics --- 00:07:02.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:02.199 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:02.199 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=135901 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 135901 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 135901 ']' 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.457 03:57:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.457 [2024-12-09 03:57:30.834915] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:07:02.457 [2024-12-09 03:57:30.835008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.457 [2024-12-09 03:57:30.911769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.457 [2024-12-09 03:57:30.965191] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:02.457 [2024-12-09 03:57:30.965251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:02.457 [2024-12-09 03:57:30.965281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:02.457 [2024-12-09 03:57:30.965294] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:02.457 [2024-12-09 03:57:30.965303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:02.457 [2024-12-09 03:57:30.965882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:02.715 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:02.973 [2024-12-09 03:57:31.343769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:02.973 ************************************ 00:07:02.973 START TEST lvs_grow_clean 00:07:02.973 ************************************ 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:02.973 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:03.230 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:03.230 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:03.488 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:03.488 03:57:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:03.488 03:57:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:03.746 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:03.746 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:03.746 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 34ecfbcc-a21c-49f9-918d-7098a215138a lvol 150 00:07:04.004 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=498e530e-fad8-4e8d-b470-6d6b8c5c5f70 00:07:04.004 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:04.004 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:04.261 [2024-12-09 03:57:32.772719] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:04.261 [2024-12-09 03:57:32.772814] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:04.261 true 00:07:04.261 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:04.261 03:57:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:04.518 03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:04.518 03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:04.775 03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 498e530e-fad8-4e8d-b470-6d6b8c5c5f70 00:07:05.033 03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:05.291 [2024-12-09 03:57:33.843982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.291 03:57:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=136335 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:05.856 03:57:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 136335 /var/tmp/bdevperf.sock 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 136335 ']' 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:05.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:05.856 [2024-12-09 03:57:34.169937] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:07:05.856 [2024-12-09 03:57:34.170011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136335 ] 00:07:05.856 [2024-12-09 03:57:34.239243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.856 [2024-12-09 03:57:34.298076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:05.856 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:06.421 Nvme0n1 00:07:06.421 03:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:06.683 [ 00:07:06.683 { 00:07:06.683 "name": "Nvme0n1", 00:07:06.683 "aliases": [ 00:07:06.683 "498e530e-fad8-4e8d-b470-6d6b8c5c5f70" 00:07:06.683 ], 00:07:06.683 "product_name": "NVMe disk", 00:07:06.683 "block_size": 4096, 00:07:06.683 "num_blocks": 38912, 00:07:06.683 "uuid": "498e530e-fad8-4e8d-b470-6d6b8c5c5f70", 00:07:06.683 "numa_id": 0, 00:07:06.683 "assigned_rate_limits": { 00:07:06.683 "rw_ios_per_sec": 0, 00:07:06.683 "rw_mbytes_per_sec": 0, 00:07:06.683 "r_mbytes_per_sec": 0, 00:07:06.683 "w_mbytes_per_sec": 0 00:07:06.683 }, 00:07:06.683 "claimed": false, 00:07:06.683 "zoned": false, 00:07:06.683 "supported_io_types": { 00:07:06.683 "read": true, 
00:07:06.683 "write": true, 00:07:06.683 "unmap": true, 00:07:06.683 "flush": true, 00:07:06.683 "reset": true, 00:07:06.683 "nvme_admin": true, 00:07:06.683 "nvme_io": true, 00:07:06.683 "nvme_io_md": false, 00:07:06.683 "write_zeroes": true, 00:07:06.683 "zcopy": false, 00:07:06.683 "get_zone_info": false, 00:07:06.683 "zone_management": false, 00:07:06.683 "zone_append": false, 00:07:06.683 "compare": true, 00:07:06.683 "compare_and_write": true, 00:07:06.683 "abort": true, 00:07:06.683 "seek_hole": false, 00:07:06.683 "seek_data": false, 00:07:06.683 "copy": true, 00:07:06.683 "nvme_iov_md": false 00:07:06.683 }, 00:07:06.683 "memory_domains": [ 00:07:06.683 { 00:07:06.683 "dma_device_id": "system", 00:07:06.683 "dma_device_type": 1 00:07:06.683 } 00:07:06.683 ], 00:07:06.683 "driver_specific": { 00:07:06.683 "nvme": [ 00:07:06.683 { 00:07:06.683 "trid": { 00:07:06.683 "trtype": "TCP", 00:07:06.683 "adrfam": "IPv4", 00:07:06.683 "traddr": "10.0.0.2", 00:07:06.683 "trsvcid": "4420", 00:07:06.683 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:06.683 }, 00:07:06.683 "ctrlr_data": { 00:07:06.683 "cntlid": 1, 00:07:06.683 "vendor_id": "0x8086", 00:07:06.683 "model_number": "SPDK bdev Controller", 00:07:06.683 "serial_number": "SPDK0", 00:07:06.683 "firmware_revision": "25.01", 00:07:06.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:06.683 "oacs": { 00:07:06.683 "security": 0, 00:07:06.683 "format": 0, 00:07:06.683 "firmware": 0, 00:07:06.683 "ns_manage": 0 00:07:06.683 }, 00:07:06.683 "multi_ctrlr": true, 00:07:06.683 "ana_reporting": false 00:07:06.683 }, 00:07:06.683 "vs": { 00:07:06.683 "nvme_version": "1.3" 00:07:06.683 }, 00:07:06.683 "ns_data": { 00:07:06.683 "id": 1, 00:07:06.683 "can_share": true 00:07:06.683 } 00:07:06.683 } 00:07:06.683 ], 00:07:06.683 "mp_policy": "active_passive" 00:07:06.683 } 00:07:06.683 } 00:07:06.683 ] 00:07:06.683 03:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=136473 
00:07:06.683 03:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:06.683 03:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:06.942 Running I/O for 10 seconds... 00:07:07.874 Latency(us) 00:07:07.874 [2024-12-09T02:57:36.450Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.874 Nvme0n1 : 1.00 13686.00 53.46 0.00 0.00 0.00 0.00 0.00 00:07:07.874 [2024-12-09T02:57:36.450Z] =================================================================================================================== 00:07:07.874 [2024-12-09T02:57:36.450Z] Total : 13686.00 53.46 0.00 0.00 0.00 0.00 0.00 00:07:07.874 00:07:08.806 03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:08.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.806 Nvme0n1 : 2.00 13739.00 53.67 0.00 0.00 0.00 0.00 0.00 00:07:08.806 [2024-12-09T02:57:37.382Z] =================================================================================================================== 00:07:08.806 [2024-12-09T02:57:37.382Z] Total : 13739.00 53.67 0.00 0.00 0.00 0.00 0.00 00:07:08.806 00:07:09.066 true 00:07:09.066 03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:09.066 03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:09.325 03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:09.325 03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:09.325 03:57:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 136473 00:07:09.894 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:09.894 Nvme0n1 : 3.00 13802.00 53.91 0.00 0.00 0.00 0.00 0.00 00:07:09.894 [2024-12-09T02:57:38.470Z] =================================================================================================================== 00:07:09.894 [2024-12-09T02:57:38.470Z] Total : 13802.00 53.91 0.00 0.00 0.00 0.00 0.00 00:07:09.894 00:07:10.829 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:10.829 Nvme0n1 : 4.00 13891.50 54.26 0.00 0.00 0.00 0.00 0.00 00:07:10.829 [2024-12-09T02:57:39.405Z] =================================================================================================================== 00:07:10.829 [2024-12-09T02:57:39.405Z] Total : 13891.50 54.26 0.00 0.00 0.00 0.00 0.00 00:07:10.829 00:07:11.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:11.763 Nvme0n1 : 5.00 13935.60 54.44 0.00 0.00 0.00 0.00 0.00 00:07:11.763 [2024-12-09T02:57:40.339Z] =================================================================================================================== 00:07:11.763 [2024-12-09T02:57:40.339Z] Total : 13935.60 54.44 0.00 0.00 0.00 0.00 0.00 00:07:11.763 00:07:13.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:13.139 Nvme0n1 : 6.00 13951.67 54.50 0.00 0.00 0.00 0.00 0.00 00:07:13.139 [2024-12-09T02:57:41.715Z] =================================================================================================================== 00:07:13.139 
[2024-12-09T02:57:41.715Z] Total : 13951.67 54.50 0.00 0.00 0.00 0.00 0.00 00:07:13.139 00:07:14.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:14.073 Nvme0n1 : 7.00 13984.86 54.63 0.00 0.00 0.00 0.00 0.00 00:07:14.073 [2024-12-09T02:57:42.649Z] =================================================================================================================== 00:07:14.073 [2024-12-09T02:57:42.649Z] Total : 13984.86 54.63 0.00 0.00 0.00 0.00 0.00 00:07:14.073 00:07:15.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.010 Nvme0n1 : 8.00 14020.75 54.77 0.00 0.00 0.00 0.00 0.00 00:07:15.010 [2024-12-09T02:57:43.586Z] =================================================================================================================== 00:07:15.010 [2024-12-09T02:57:43.586Z] Total : 14020.75 54.77 0.00 0.00 0.00 0.00 0.00 00:07:15.010 00:07:15.945 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.945 Nvme0n1 : 9.00 14043.33 54.86 0.00 0.00 0.00 0.00 0.00 00:07:15.945 [2024-12-09T02:57:44.521Z] =================================================================================================================== 00:07:15.945 [2024-12-09T02:57:44.521Z] Total : 14043.33 54.86 0.00 0.00 0.00 0.00 0.00 00:07:15.945 00:07:16.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.897 Nvme0n1 : 10.00 14051.80 54.89 0.00 0.00 0.00 0.00 0.00 00:07:16.897 [2024-12-09T02:57:45.473Z] =================================================================================================================== 00:07:16.897 [2024-12-09T02:57:45.473Z] Total : 14051.80 54.89 0.00 0.00 0.00 0.00 0.00 00:07:16.897 00:07:16.897 00:07:16.897 Latency(us) 00:07:16.897 [2024-12-09T02:57:45.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:16.897 Nvme0n1 : 10.01 14051.98 54.89 0.00 0.00 9100.82 6359.42 15825.73 00:07:16.897 [2024-12-09T02:57:45.473Z] =================================================================================================================== 00:07:16.897 [2024-12-09T02:57:45.473Z] Total : 14051.98 54.89 0.00 0.00 9100.82 6359.42 15825.73 00:07:16.897 { 00:07:16.897 "results": [ 00:07:16.897 { 00:07:16.897 "job": "Nvme0n1", 00:07:16.897 "core_mask": "0x2", 00:07:16.897 "workload": "randwrite", 00:07:16.897 "status": "finished", 00:07:16.897 "queue_depth": 128, 00:07:16.897 "io_size": 4096, 00:07:16.897 "runtime": 10.008413, 00:07:16.897 "iops": 14051.978070848994, 00:07:16.897 "mibps": 54.890539339253884, 00:07:16.897 "io_failed": 0, 00:07:16.897 "io_timeout": 0, 00:07:16.897 "avg_latency_us": 9100.820798340685, 00:07:16.897 "min_latency_us": 6359.419259259259, 00:07:16.897 "max_latency_us": 15825.730370370371 00:07:16.897 } 00:07:16.897 ], 00:07:16.897 "core_count": 1 00:07:16.897 } 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 136335 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 136335 ']' 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 136335 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136335 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:16.897 03:57:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136335' 00:07:16.897 killing process with pid 136335 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 136335 00:07:16.897 Received shutdown signal, test time was about 10.000000 seconds 00:07:16.897 00:07:16.897 Latency(us) 00:07:16.897 [2024-12-09T02:57:45.473Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.897 [2024-12-09T02:57:45.473Z] =================================================================================================================== 00:07:16.897 [2024-12-09T02:57:45.473Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:16.897 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 136335 00:07:17.155 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:17.413 03:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:17.671 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:17.671 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:17.929 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:17.929 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:17.929 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:18.187 [2024-12-09 03:57:46.655188] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.187 03:57:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:18.187 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:18.445 request: 00:07:18.445 { 00:07:18.445 "uuid": "34ecfbcc-a21c-49f9-918d-7098a215138a", 00:07:18.445 "method": "bdev_lvol_get_lvstores", 00:07:18.445 "req_id": 1 00:07:18.445 } 00:07:18.445 Got JSON-RPC error response 00:07:18.445 response: 00:07:18.445 { 00:07:18.445 "code": -19, 00:07:18.445 "message": "No such device" 00:07:18.445 } 00:07:18.445 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:18.445 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:18.445 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:18.445 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:18.445 03:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:18.703 aio_bdev 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 498e530e-fad8-4e8d-b470-6d6b8c5c5f70 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=498e530e-fad8-4e8d-b470-6d6b8c5c5f70 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:18.703 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:18.961 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 498e530e-fad8-4e8d-b470-6d6b8c5c5f70 -t 2000 00:07:19.220 [ 00:07:19.220 { 00:07:19.220 "name": "498e530e-fad8-4e8d-b470-6d6b8c5c5f70", 00:07:19.220 "aliases": [ 00:07:19.220 "lvs/lvol" 00:07:19.220 ], 00:07:19.220 "product_name": "Logical Volume", 00:07:19.220 "block_size": 4096, 00:07:19.220 "num_blocks": 38912, 00:07:19.220 "uuid": "498e530e-fad8-4e8d-b470-6d6b8c5c5f70", 00:07:19.220 "assigned_rate_limits": { 00:07:19.220 "rw_ios_per_sec": 0, 00:07:19.220 "rw_mbytes_per_sec": 0, 00:07:19.220 "r_mbytes_per_sec": 0, 00:07:19.220 "w_mbytes_per_sec": 0 00:07:19.220 }, 00:07:19.220 "claimed": false, 00:07:19.220 "zoned": false, 00:07:19.220 "supported_io_types": { 00:07:19.220 "read": true, 00:07:19.220 "write": true, 00:07:19.220 "unmap": true, 00:07:19.220 "flush": false, 00:07:19.220 "reset": true, 00:07:19.220 
"nvme_admin": false, 00:07:19.220 "nvme_io": false, 00:07:19.220 "nvme_io_md": false, 00:07:19.220 "write_zeroes": true, 00:07:19.220 "zcopy": false, 00:07:19.220 "get_zone_info": false, 00:07:19.220 "zone_management": false, 00:07:19.220 "zone_append": false, 00:07:19.220 "compare": false, 00:07:19.220 "compare_and_write": false, 00:07:19.220 "abort": false, 00:07:19.220 "seek_hole": true, 00:07:19.220 "seek_data": true, 00:07:19.220 "copy": false, 00:07:19.220 "nvme_iov_md": false 00:07:19.220 }, 00:07:19.220 "driver_specific": { 00:07:19.220 "lvol": { 00:07:19.220 "lvol_store_uuid": "34ecfbcc-a21c-49f9-918d-7098a215138a", 00:07:19.220 "base_bdev": "aio_bdev", 00:07:19.220 "thin_provision": false, 00:07:19.220 "num_allocated_clusters": 38, 00:07:19.220 "snapshot": false, 00:07:19.220 "clone": false, 00:07:19.220 "esnap_clone": false 00:07:19.220 } 00:07:19.220 } 00:07:19.220 } 00:07:19.220 ] 00:07:19.220 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:19.220 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:19.220 03:57:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:19.477 03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:19.733 03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:19.733 03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:19.990 03:57:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:19.990 03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 498e530e-fad8-4e8d-b470-6d6b8c5c5f70 00:07:20.247 03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 34ecfbcc-a21c-49f9-918d-7098a215138a 00:07:20.505 03:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.763 00:07:20.763 real 0m17.788s 00:07:20.763 user 0m17.255s 00:07:20.763 sys 0m1.897s 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:20.763 ************************************ 00:07:20.763 END TEST lvs_grow_clean 00:07:20.763 ************************************ 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:20.763 ************************************ 
00:07:20.763 START TEST lvs_grow_dirty 00:07:20.763 ************************************ 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:20.763 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:21.021 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:21.021 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:21.278 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:21.278 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:21.278 03:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:21.535 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:21.535 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:21.535 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 lvol 150 00:07:21.792 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1b740548-bc67-4906-8bb3-da9947314eed 00:07:21.793 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:21.793 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:22.051 [2024-12-09 03:57:50.613859] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:22.051 [2024-12-09 03:57:50.613955] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:22.051 true 00:07:22.309 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:22.309 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:22.567 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:22.567 03:57:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:22.825 03:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1b740548-bc67-4906-8bb3-da9947314eed 00:07:23.096 03:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:23.355 [2024-12-09 03:57:51.701127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:23.355 03:57:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=138531 00:07:23.614 03:57:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 138531 /var/tmp/bdevperf.sock 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 138531 ']' 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:23.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.614 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:23.614 [2024-12-09 03:57:52.075224] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:07:23.614 [2024-12-09 03:57:52.075335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid138531 ] 00:07:23.614 [2024-12-09 03:57:52.141362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.872 [2024-12-09 03:57:52.198353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.872 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:23.872 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:23.872 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:24.130 Nvme0n1 00:07:24.130 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:24.388 [ 00:07:24.388 { 00:07:24.388 "name": "Nvme0n1", 00:07:24.388 "aliases": [ 00:07:24.388 "1b740548-bc67-4906-8bb3-da9947314eed" 00:07:24.388 ], 00:07:24.388 "product_name": "NVMe disk", 00:07:24.388 "block_size": 4096, 00:07:24.388 "num_blocks": 38912, 00:07:24.388 "uuid": "1b740548-bc67-4906-8bb3-da9947314eed", 00:07:24.388 "numa_id": 0, 00:07:24.388 "assigned_rate_limits": { 00:07:24.388 "rw_ios_per_sec": 0, 00:07:24.388 "rw_mbytes_per_sec": 0, 00:07:24.388 "r_mbytes_per_sec": 0, 00:07:24.388 "w_mbytes_per_sec": 0 00:07:24.388 }, 00:07:24.388 "claimed": false, 00:07:24.388 "zoned": false, 00:07:24.388 "supported_io_types": { 00:07:24.388 "read": true, 
00:07:24.388 "write": true, 00:07:24.388 "unmap": true, 00:07:24.388 "flush": true, 00:07:24.388 "reset": true, 00:07:24.388 "nvme_admin": true, 00:07:24.388 "nvme_io": true, 00:07:24.388 "nvme_io_md": false, 00:07:24.388 "write_zeroes": true, 00:07:24.388 "zcopy": false, 00:07:24.388 "get_zone_info": false, 00:07:24.388 "zone_management": false, 00:07:24.388 "zone_append": false, 00:07:24.388 "compare": true, 00:07:24.388 "compare_and_write": true, 00:07:24.388 "abort": true, 00:07:24.388 "seek_hole": false, 00:07:24.388 "seek_data": false, 00:07:24.388 "copy": true, 00:07:24.388 "nvme_iov_md": false 00:07:24.388 }, 00:07:24.388 "memory_domains": [ 00:07:24.388 { 00:07:24.388 "dma_device_id": "system", 00:07:24.388 "dma_device_type": 1 00:07:24.388 } 00:07:24.388 ], 00:07:24.388 "driver_specific": { 00:07:24.388 "nvme": [ 00:07:24.388 { 00:07:24.388 "trid": { 00:07:24.388 "trtype": "TCP", 00:07:24.388 "adrfam": "IPv4", 00:07:24.388 "traddr": "10.0.0.2", 00:07:24.388 "trsvcid": "4420", 00:07:24.388 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:24.388 }, 00:07:24.388 "ctrlr_data": { 00:07:24.388 "cntlid": 1, 00:07:24.388 "vendor_id": "0x8086", 00:07:24.388 "model_number": "SPDK bdev Controller", 00:07:24.388 "serial_number": "SPDK0", 00:07:24.388 "firmware_revision": "25.01", 00:07:24.388 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:24.388 "oacs": { 00:07:24.388 "security": 0, 00:07:24.388 "format": 0, 00:07:24.388 "firmware": 0, 00:07:24.388 "ns_manage": 0 00:07:24.388 }, 00:07:24.388 "multi_ctrlr": true, 00:07:24.388 "ana_reporting": false 00:07:24.388 }, 00:07:24.388 "vs": { 00:07:24.388 "nvme_version": "1.3" 00:07:24.388 }, 00:07:24.389 "ns_data": { 00:07:24.389 "id": 1, 00:07:24.389 "can_share": true 00:07:24.389 } 00:07:24.389 } 00:07:24.389 ], 00:07:24.389 "mp_policy": "active_passive" 00:07:24.389 } 00:07:24.389 } 00:07:24.389 ] 00:07:24.389 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=138550 
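bdevperf reports both IOPS and MiB/s in its results JSON; for the fixed 4096-byte IO size used in these runs the two differ by a constant factor of 256. A quick consistency check against the clean run's final results (numbers copied from the `"results"` JSON earlier in this log):

```python
# From the clean run's bdevperf results JSON: 4096-byte random writes.
iops = 14051.978070848994
io_size = 4096  # bytes per IO

# IOPS * bytes-per-IO gives bytes/s; divide by 2**20 for MiB/s.
mibps = iops * io_size / (1024 * 1024)

# Agrees with the reported "mibps": 54.890539339253884
assert abs(mibps - 54.890539339253884) < 1e-9
print(round(mibps, 2))  # 54.89
```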
00:07:24.389 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:24.389 03:57:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:24.646 Running I/O for 10 seconds... 00:07:25.580 Latency(us) 00:07:25.580 [2024-12-09T02:57:54.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.580 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:07:25.580 [2024-12-09T02:57:54.156Z] =================================================================================================================== 00:07:25.580 [2024-12-09T02:57:54.156Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:07:25.580 00:07:26.513 03:57:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:26.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:26.513 Nvme0n1 : 2.00 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:07:26.513 [2024-12-09T02:57:55.089Z] =================================================================================================================== 00:07:26.513 [2024-12-09T02:57:55.089Z] Total : 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:07:26.513 00:07:26.776 true 00:07:26.776 03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:26.776 03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:07:27.034 03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:27.034 03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:27.034 03:57:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 138550 00:07:27.599 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:27.599 Nvme0n1 : 3.00 15261.67 59.62 0.00 0.00 0.00 0.00 0.00 00:07:27.599 [2024-12-09T02:57:56.175Z] =================================================================================================================== 00:07:27.599 [2024-12-09T02:57:56.175Z] Total : 15261.67 59.62 0.00 0.00 0.00 0.00 0.00 00:07:27.599 00:07:28.531 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:28.531 Nvme0n1 : 4.00 15367.50 60.03 0.00 0.00 0.00 0.00 0.00 00:07:28.531 [2024-12-09T02:57:57.107Z] =================================================================================================================== 00:07:28.531 [2024-12-09T02:57:57.107Z] Total : 15367.50 60.03 0.00 0.00 0.00 0.00 0.00 00:07:28.531 00:07:29.906 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:29.906 Nvme0n1 : 5.00 15431.20 60.28 0.00 0.00 0.00 0.00 0.00 00:07:29.906 [2024-12-09T02:57:58.482Z] =================================================================================================================== 00:07:29.906 [2024-12-09T02:57:58.482Z] Total : 15431.20 60.28 0.00 0.00 0.00 0.00 0.00 00:07:29.906 00:07:30.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:30.842 Nvme0n1 : 6.00 15494.83 60.53 0.00 0.00 0.00 0.00 0.00 00:07:30.842 [2024-12-09T02:57:59.418Z] =================================================================================================================== 00:07:30.842 
[2024-12-09T02:57:59.418Z] Total : 15494.83 60.53 0.00 0.00 0.00 0.00 0.00 00:07:30.842 00:07:31.776 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:31.776 Nvme0n1 : 7.00 15535.86 60.69 0.00 0.00 0.00 0.00 0.00 00:07:31.776 [2024-12-09T02:58:00.352Z] =================================================================================================================== 00:07:31.776 [2024-12-09T02:58:00.352Z] Total : 15535.86 60.69 0.00 0.00 0.00 0.00 0.00 00:07:31.776 00:07:32.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:32.710 Nvme0n1 : 8.00 15562.38 60.79 0.00 0.00 0.00 0.00 0.00 00:07:32.710 [2024-12-09T02:58:01.286Z] =================================================================================================================== 00:07:32.710 [2024-12-09T02:58:01.286Z] Total : 15562.38 60.79 0.00 0.00 0.00 0.00 0.00 00:07:32.710 00:07:33.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.647 Nvme0n1 : 9.00 15597.11 60.93 0.00 0.00 0.00 0.00 0.00 00:07:33.647 [2024-12-09T02:58:02.223Z] =================================================================================================================== 00:07:33.647 [2024-12-09T02:58:02.223Z] Total : 15597.11 60.93 0.00 0.00 0.00 0.00 0.00 00:07:33.647 00:07:34.583 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.583 Nvme0n1 : 10.00 15631.30 61.06 0.00 0.00 0.00 0.00 0.00 00:07:34.583 [2024-12-09T02:58:03.159Z] =================================================================================================================== 00:07:34.583 [2024-12-09T02:58:03.159Z] Total : 15631.30 61.06 0.00 0.00 0.00 0.00 0.00 00:07:34.583 00:07:34.583 00:07:34.583 Latency(us) 00:07:34.583 [2024-12-09T02:58:03.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:34.584 Nvme0n1 : 10.00 15630.48 61.06 0.00 0.00 8183.93 4247.70 17573.36 00:07:34.584 [2024-12-09T02:58:03.160Z] =================================================================================================================== 00:07:34.584 [2024-12-09T02:58:03.160Z] Total : 15630.48 61.06 0.00 0.00 8183.93 4247.70 17573.36 00:07:34.584 { 00:07:34.584 "results": [ 00:07:34.584 { 00:07:34.584 "job": "Nvme0n1", 00:07:34.584 "core_mask": "0x2", 00:07:34.584 "workload": "randwrite", 00:07:34.584 "status": "finished", 00:07:34.584 "queue_depth": 128, 00:07:34.584 "io_size": 4096, 00:07:34.584 "runtime": 10.004619, 00:07:34.584 "iops": 15630.480281158134, 00:07:34.584 "mibps": 61.05656359827396, 00:07:34.584 "io_failed": 0, 00:07:34.584 "io_timeout": 0, 00:07:34.584 "avg_latency_us": 8183.929029631383, 00:07:34.584 "min_latency_us": 4247.7037037037035, 00:07:34.584 "max_latency_us": 17573.357037037036 00:07:34.584 } 00:07:34.584 ], 00:07:34.584 "core_count": 1 00:07:34.584 } 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 138531 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 138531 ']' 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 138531 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138531 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:34.584 03:58:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 138531' 00:07:34.584 killing process with pid 138531 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 138531 00:07:34.584 Received shutdown signal, test time was about 10.000000 seconds 00:07:34.584 00:07:34.584 Latency(us) 00:07:34.584 [2024-12-09T02:58:03.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.584 [2024-12-09T02:58:03.160Z] =================================================================================================================== 00:07:34.584 [2024-12-09T02:58:03.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:34.584 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 138531 00:07:34.843 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:35.102 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:35.361 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:35.361 03:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 135901 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 135901 00:07:35.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 135901 Killed "${NVMF_APP[@]}" "$@" 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=139889 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 139889 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 139889 ']' 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.621 03:58:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.621 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:35.880 [2024-12-09 03:58:04.246303] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:07:35.880 [2024-12-09 03:58:04.246376] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.880 [2024-12-09 03:58:04.314165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.880 [2024-12-09 03:58:04.368815] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.880 [2024-12-09 03:58:04.368877] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:35.880 [2024-12-09 03:58:04.368900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.880 [2024-12-09 03:58:04.368910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.880 [2024-12-09 03:58:04.368919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:35.880 [2024-12-09 03:58:04.369516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.139 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.398 [2024-12-09 03:58:04.753558] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:36.398 [2024-12-09 03:58:04.753682] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:36.398 [2024-12-09 03:58:04.753727] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1b740548-bc67-4906-8bb3-da9947314eed 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1b740548-bc67-4906-8bb3-da9947314eed 
00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.398 03:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:36.657 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b740548-bc67-4906-8bb3-da9947314eed -t 2000 00:07:36.914 [ 00:07:36.914 { 00:07:36.914 "name": "1b740548-bc67-4906-8bb3-da9947314eed", 00:07:36.914 "aliases": [ 00:07:36.914 "lvs/lvol" 00:07:36.914 ], 00:07:36.914 "product_name": "Logical Volume", 00:07:36.914 "block_size": 4096, 00:07:36.914 "num_blocks": 38912, 00:07:36.914 "uuid": "1b740548-bc67-4906-8bb3-da9947314eed", 00:07:36.914 "assigned_rate_limits": { 00:07:36.914 "rw_ios_per_sec": 0, 00:07:36.914 "rw_mbytes_per_sec": 0, 00:07:36.914 "r_mbytes_per_sec": 0, 00:07:36.914 "w_mbytes_per_sec": 0 00:07:36.914 }, 00:07:36.914 "claimed": false, 00:07:36.914 "zoned": false, 00:07:36.914 "supported_io_types": { 00:07:36.914 "read": true, 00:07:36.914 "write": true, 00:07:36.914 "unmap": true, 00:07:36.914 "flush": false, 00:07:36.914 "reset": true, 00:07:36.914 "nvme_admin": false, 00:07:36.914 "nvme_io": false, 00:07:36.914 "nvme_io_md": false, 00:07:36.914 "write_zeroes": true, 00:07:36.914 "zcopy": false, 00:07:36.914 "get_zone_info": false, 00:07:36.914 "zone_management": false, 00:07:36.914 "zone_append": 
false, 00:07:36.914 "compare": false, 00:07:36.914 "compare_and_write": false, 00:07:36.914 "abort": false, 00:07:36.914 "seek_hole": true, 00:07:36.914 "seek_data": true, 00:07:36.914 "copy": false, 00:07:36.914 "nvme_iov_md": false 00:07:36.914 }, 00:07:36.914 "driver_specific": { 00:07:36.914 "lvol": { 00:07:36.914 "lvol_store_uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6", 00:07:36.914 "base_bdev": "aio_bdev", 00:07:36.914 "thin_provision": false, 00:07:36.914 "num_allocated_clusters": 38, 00:07:36.914 "snapshot": false, 00:07:36.914 "clone": false, 00:07:36.914 "esnap_clone": false 00:07:36.914 } 00:07:36.914 } 00:07:36.914 } 00:07:36.914 ] 00:07:36.914 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:36.914 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:36.914 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:37.172 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:37.172 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:37.172 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:37.429 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:37.430 03:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:37.687 [2024-12-09 03:58:06.123418] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:37.687 03:58:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.687 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:37.946 request: 00:07:37.947 { 00:07:37.947 "uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6", 00:07:37.947 "method": "bdev_lvol_get_lvstores", 00:07:37.947 "req_id": 1 00:07:37.947 } 00:07:37.947 Got JSON-RPC error response 00:07:37.947 response: 00:07:37.947 { 00:07:37.947 "code": -19, 00:07:37.947 "message": "No such device" 00:07:37.947 } 00:07:37.947 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:37.947 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.947 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.947 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.947 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:38.206 aio_bdev 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1b740548-bc67-4906-8bb3-da9947314eed 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1b740548-bc67-4906-8bb3-da9947314eed 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:38.206 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:38.465 03:58:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1b740548-bc67-4906-8bb3-da9947314eed -t 2000 00:07:38.724 [ 00:07:38.724 { 00:07:38.724 "name": "1b740548-bc67-4906-8bb3-da9947314eed", 00:07:38.724 "aliases": [ 00:07:38.724 "lvs/lvol" 00:07:38.724 ], 00:07:38.724 "product_name": "Logical Volume", 00:07:38.724 "block_size": 4096, 00:07:38.724 "num_blocks": 38912, 00:07:38.724 "uuid": "1b740548-bc67-4906-8bb3-da9947314eed", 00:07:38.724 "assigned_rate_limits": { 00:07:38.724 "rw_ios_per_sec": 0, 00:07:38.724 "rw_mbytes_per_sec": 0, 00:07:38.724 "r_mbytes_per_sec": 0, 00:07:38.724 "w_mbytes_per_sec": 0 00:07:38.724 }, 00:07:38.724 "claimed": false, 00:07:38.724 "zoned": false, 00:07:38.724 "supported_io_types": { 00:07:38.724 "read": true, 00:07:38.724 "write": true, 00:07:38.724 "unmap": true, 00:07:38.724 "flush": false, 00:07:38.724 "reset": true, 00:07:38.724 "nvme_admin": false, 00:07:38.724 "nvme_io": false, 00:07:38.724 "nvme_io_md": false, 00:07:38.724 "write_zeroes": true, 00:07:38.724 "zcopy": false, 00:07:38.724 "get_zone_info": false, 00:07:38.724 "zone_management": false, 00:07:38.724 "zone_append": false, 00:07:38.724 "compare": false, 00:07:38.724 "compare_and_write": false, 
00:07:38.724 "abort": false, 00:07:38.724 "seek_hole": true, 00:07:38.724 "seek_data": true, 00:07:38.724 "copy": false, 00:07:38.724 "nvme_iov_md": false 00:07:38.724 }, 00:07:38.724 "driver_specific": { 00:07:38.724 "lvol": { 00:07:38.724 "lvol_store_uuid": "c7d1e89a-5807-4f63-bddd-953bfeeb50f6", 00:07:38.724 "base_bdev": "aio_bdev", 00:07:38.724 "thin_provision": false, 00:07:38.724 "num_allocated_clusters": 38, 00:07:38.724 "snapshot": false, 00:07:38.724 "clone": false, 00:07:38.724 "esnap_clone": false 00:07:38.724 } 00:07:38.724 } 00:07:38.724 } 00:07:38.724 ] 00:07:38.724 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:38.724 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:38.724 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:38.982 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:38.982 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:38.982 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:39.240 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:39.240 03:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1b740548-bc67-4906-8bb3-da9947314eed 00:07:39.498 03:58:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c7d1e89a-5807-4f63-bddd-953bfeeb50f6 00:07:40.064 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:40.064 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:40.064 00:07:40.064 real 0m19.398s 00:07:40.064 user 0m49.064s 00:07:40.064 sys 0m4.770s 00:07:40.064 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.064 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:40.064 ************************************ 00:07:40.064 END TEST lvs_grow_dirty 00:07:40.064 ************************************ 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:40.322 nvmf_trace.0 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.322 rmmod nvme_tcp 00:07:40.322 rmmod nvme_fabrics 00:07:40.322 rmmod nvme_keyring 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 139889 ']' 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 139889 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 139889 ']' 00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 139889 
00:07:40.322 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 139889 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 139889' 00:07:40.323 killing process with pid 139889 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 139889 00:07:40.323 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 139889 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.581 03:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.581 03:58:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.494 00:07:42.494 real 0m42.706s 00:07:42.494 user 1m12.364s 00:07:42.494 sys 0m8.659s 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:42.494 ************************************ 00:07:42.494 END TEST nvmf_lvs_grow 00:07:42.494 ************************************ 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.494 03:58:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.754 ************************************ 00:07:42.754 START TEST nvmf_bdev_io_wait 00:07:42.754 ************************************ 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:42.754 * Looking for test storage... 
00:07:42.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:42.754 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.755 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.755 --rc genhtml_branch_coverage=1 00:07:42.755 --rc genhtml_function_coverage=1 00:07:42.755 --rc genhtml_legend=1 00:07:42.755 --rc geninfo_all_blocks=1 00:07:42.755 --rc geninfo_unexecuted_blocks=1 00:07:42.755 00:07:42.755 ' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.755 --rc genhtml_branch_coverage=1 00:07:42.755 --rc genhtml_function_coverage=1 00:07:42.755 --rc genhtml_legend=1 00:07:42.755 --rc geninfo_all_blocks=1 00:07:42.755 --rc geninfo_unexecuted_blocks=1 00:07:42.755 00:07:42.755 ' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.755 --rc genhtml_branch_coverage=1 00:07:42.755 --rc genhtml_function_coverage=1 00:07:42.755 --rc genhtml_legend=1 00:07:42.755 --rc geninfo_all_blocks=1 00:07:42.755 --rc geninfo_unexecuted_blocks=1 00:07:42.755 00:07:42.755 ' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.755 --rc genhtml_branch_coverage=1 00:07:42.755 --rc genhtml_function_coverage=1 00:07:42.755 --rc genhtml_legend=1 00:07:42.755 --rc geninfo_all_blocks=1 00:07:42.755 --rc geninfo_unexecuted_blocks=1 00:07:42.755 00:07:42.755 ' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.755 03:58:11 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.755 03:58:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:45.296 03:58:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:45.296 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:45.296 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:45.296 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:45.297 03:58:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:45.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:45.297 
03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:45.297 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:45.297 03:58:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:45.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:45.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:07:45.297 00:07:45.297 --- 10.0.0.2 ping statistics --- 00:07:45.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.297 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:45.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:45.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:07:45.297 00:07:45.297 --- 10.0.0.1 ping statistics --- 00:07:45.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:45.297 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=142545 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
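The `nvmf_tcp_init` trace above builds the test topology: create a network namespace, move the target NIC (`cvl_0_0`) into it, assign `10.0.0.2/24` inside and `10.0.0.1/24` on the initiator side, bring the links up, and verify both directions with `ping`. A dry-run sketch of those steps that only prints the equivalent commands (`setup_netns_dryrun` is a hypothetical helper; interface names and addresses are taken from the log, and running the real commands requires root):

```shell
# Dry-run sketch of the namespace setup performed by nvmf/common.sh in
# the trace; prints each command instead of executing it.
setup_netns_dryrun() {
    local ns=$1 tgt_if=$2 ini_if=$3
    local run="echo"   # swap "echo" for "sudo" to actually execute
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$ini_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
}

setup_netns_dryrun cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Isolating the target in its own namespace lets the initiator and target share one host while still exercising a real TCP path over physical NICs, which is why the trace pings `10.0.0.2` from the host and `10.0.0.1` from inside the namespace before starting `nvmf_tgt` under `ip netns exec`.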
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 142545 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 142545 ']' 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.297 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.297 [2024-12-09 03:58:13.715029] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:07:45.297 [2024-12-09 03:58:13.715113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:45.297 [2024-12-09 03:58:13.787552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.297 [2024-12-09 03:58:13.850008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:45.297 [2024-12-09 03:58:13.850073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:45.297 [2024-12-09 03:58:13.850087] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:45.297 [2024-12-09 03:58:13.850098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:45.297 [2024-12-09 03:58:13.850111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:45.297 [2024-12-09 03:58:13.851766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.297 [2024-12-09 03:58:13.855291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.297 [2024-12-09 03:58:13.855370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.297 [2024-12-09 03:58:13.859302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 03:58:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 03:58:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 [2024-12-09 03:58:14.069481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 Malloc0 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 
03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:45.557 [2024-12-09 03:58:14.122797] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=142577 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=142578 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=142581 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:45.557 { 00:07:45.557 "params": { 00:07:45.557 "name": "Nvme$subsystem", 00:07:45.557 "trtype": "$TEST_TRANSPORT", 00:07:45.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.557 "adrfam": "ipv4", 00:07:45.557 "trsvcid": "$NVMF_PORT", 00:07:45.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.557 "hdgst": ${hdgst:-false}, 00:07:45.557 "ddgst": ${ddgst:-false} 00:07:45.557 }, 00:07:45.557 "method": "bdev_nvme_attach_controller" 00:07:45.557 } 00:07:45.557 EOF 00:07:45.557 )") 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:45.557 { 00:07:45.557 "params": { 00:07:45.557 
"name": "Nvme$subsystem", 00:07:45.557 "trtype": "$TEST_TRANSPORT", 00:07:45.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.557 "adrfam": "ipv4", 00:07:45.557 "trsvcid": "$NVMF_PORT", 00:07:45.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.557 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.557 "hdgst": ${hdgst:-false}, 00:07:45.557 "ddgst": ${ddgst:-false} 00:07:45.557 }, 00:07:45.557 "method": "bdev_nvme_attach_controller" 00:07:45.557 } 00:07:45.557 EOF 00:07:45.557 )") 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=142583 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:45.557 { 00:07:45.557 "params": { 00:07:45.557 "name": "Nvme$subsystem", 00:07:45.557 "trtype": "$TEST_TRANSPORT", 00:07:45.557 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.557 "adrfam": "ipv4", 00:07:45.557 "trsvcid": "$NVMF_PORT", 00:07:45.557 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.557 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:45.557 "hdgst": ${hdgst:-false}, 00:07:45.557 "ddgst": ${ddgst:-false} 00:07:45.557 }, 00:07:45.557 "method": "bdev_nvme_attach_controller" 00:07:45.557 } 00:07:45.557 EOF 00:07:45.557 )") 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:45.557 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:45.558 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:45.558 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:45.558 { 00:07:45.558 "params": { 00:07:45.558 "name": "Nvme$subsystem", 00:07:45.558 "trtype": "$TEST_TRANSPORT", 00:07:45.558 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:45.558 "adrfam": "ipv4", 00:07:45.558 "trsvcid": "$NVMF_PORT", 00:07:45.558 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:45.558 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:45.558 "hdgst": ${hdgst:-false}, 00:07:45.558 "ddgst": ${ddgst:-false} 00:07:45.558 }, 00:07:45.558 "method": "bdev_nvme_attach_controller" 00:07:45.558 } 00:07:45.558 EOF 00:07:45.558 )") 00:07:45.558 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:45.558 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 142577 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:45.817 "params": { 00:07:45.817 "name": "Nvme1", 00:07:45.817 "trtype": "tcp", 00:07:45.817 "traddr": "10.0.0.2", 00:07:45.817 "adrfam": "ipv4", 00:07:45.817 "trsvcid": "4420", 00:07:45.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.817 "hdgst": false, 00:07:45.817 "ddgst": false 00:07:45.817 }, 00:07:45.817 "method": "bdev_nvme_attach_controller" 00:07:45.817 }' 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:45.817 "params": { 00:07:45.817 "name": "Nvme1", 00:07:45.817 "trtype": "tcp", 00:07:45.817 "traddr": "10.0.0.2", 00:07:45.817 "adrfam": "ipv4", 00:07:45.817 "trsvcid": "4420", 00:07:45.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.817 "hdgst": false, 00:07:45.817 "ddgst": false 00:07:45.817 }, 00:07:45.817 "method": "bdev_nvme_attach_controller" 00:07:45.817 }' 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:45.817 "params": { 00:07:45.817 "name": "Nvme1", 00:07:45.817 "trtype": "tcp", 00:07:45.817 "traddr": "10.0.0.2", 00:07:45.817 "adrfam": "ipv4", 00:07:45.817 "trsvcid": "4420", 00:07:45.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.817 "hdgst": false, 00:07:45.817 "ddgst": false 00:07:45.817 }, 00:07:45.817 "method": "bdev_nvme_attach_controller" 00:07:45.817 }' 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:45.817 03:58:14 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:45.817 "params": { 00:07:45.817 "name": "Nvme1", 00:07:45.817 "trtype": "tcp", 00:07:45.817 "traddr": "10.0.0.2", 00:07:45.817 "adrfam": "ipv4", 00:07:45.817 "trsvcid": "4420", 00:07:45.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:45.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:45.817 "hdgst": false, 00:07:45.817 "ddgst": false 00:07:45.817 }, 00:07:45.817 "method": "bdev_nvme_attach_controller" 00:07:45.817 }' 00:07:45.817 [2024-12-09 03:58:14.174513] Starting SPDK v25.01-pre git sha1 
c4269c6e2 / DPDK 24.03.0 initialization... 00:07:45.817 [2024-12-09 03:58:14.174623] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:45.817 [2024-12-09 03:58:14.174696] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:07:45.817 [2024-12-09 03:58:14.174696] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:07:45.817 [2024-12-09 03:58:14.174696] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:07:45.817 [2024-12-09 03:58:14.174771] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:45.817 [2024-12-09 03:58:14.174771] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:45.817 [2024-12-09 03:58:14.174772] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:45.817 [2024-12-09 03:58:14.358909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.077 [2024-12-09 03:58:14.412423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:46.077 [2024-12-09 03:58:14.457389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.077 [2024-12-09 03:58:14.513597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:46.077 [2024-12-09
03:58:14.561645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.077 [2024-12-09 03:58:14.618185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:46.077 [2024-12-09 03:58:14.633890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.337 [2024-12-09 03:58:14.684597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:46.337 Running I/O for 1 seconds... 00:07:46.337 Running I/O for 1 seconds... 00:07:46.337 Running I/O for 1 seconds... 00:07:46.337 Running I/O for 1 seconds... 00:07:47.274 5883.00 IOPS, 22.98 MiB/s 00:07:47.274 Latency(us) 00:07:47.274 [2024-12-09T02:58:15.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.274 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:47.274 Nvme1n1 : 1.02 5894.95 23.03 0.00 0.00 21569.96 7281.78 31457.28 00:07:47.274 [2024-12-09T02:58:15.850Z] =================================================================================================================== 00:07:47.274 [2024-12-09T02:58:15.850Z] Total : 5894.95 23.03 0.00 0.00 21569.96 7281.78 31457.28 00:07:47.274 185784.00 IOPS, 725.72 MiB/s 00:07:47.274 Latency(us) 00:07:47.274 [2024-12-09T02:58:15.850Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.274 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:47.274 Nvme1n1 : 1.00 185437.44 724.37 0.00 0.00 686.51 288.24 1844.72 00:07:47.274 [2024-12-09T02:58:15.850Z] =================================================================================================================== 00:07:47.274 [2024-12-09T02:58:15.850Z] Total : 185437.44 724.37 0.00 0.00 686.51 288.24 1844.72 00:07:47.533 5864.00 IOPS, 22.91 MiB/s 00:07:47.533 Latency(us) 00:07:47.533 [2024-12-09T02:58:16.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.533 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, 
IO size: 4096) 00:07:47.533 Nvme1n1 : 1.01 5971.26 23.33 0.00 0.00 21370.33 4466.16 44467.39 00:07:47.533 [2024-12-09T02:58:16.109Z] =================================================================================================================== 00:07:47.533 [2024-12-09T02:58:16.109Z] Total : 5971.26 23.33 0.00 0.00 21370.33 4466.16 44467.39 00:07:47.533 8137.00 IOPS, 31.79 MiB/s 00:07:47.533 Latency(us) 00:07:47.533 [2024-12-09T02:58:16.109Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:47.533 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:47.533 Nvme1n1 : 1.01 8187.47 31.98 0.00 0.00 15552.06 8155.59 25437.68 00:07:47.533 [2024-12-09T02:58:16.109Z] =================================================================================================================== 00:07:47.533 [2024-12-09T02:58:16.109Z] Total : 8187.47 31.98 0.00 0.00 15552.06 8155.59 25437.68 00:07:47.533 03:58:15 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 142578 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 142581 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 142583 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:47.533 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:47.533 rmmod nvme_tcp 00:07:47.533 rmmod nvme_fabrics 00:07:47.792 rmmod nvme_keyring 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 142545 ']' 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 142545 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 142545 ']' 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 142545 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 142545 00:07:47.792 03:58:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 142545' 00:07:47.792 killing process with pid 142545 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 142545 00:07:47.792 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 142545 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.052 03:58:16 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.052 03:58:16 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:49.961 00:07:49.961 real 0m7.341s 00:07:49.961 user 0m15.989s 00:07:49.961 sys 0m3.518s 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:49.961 ************************************ 00:07:49.961 END TEST nvmf_bdev_io_wait 00:07:49.961 ************************************ 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:49.961 ************************************ 00:07:49.961 START TEST nvmf_queue_depth 00:07:49.961 ************************************ 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:49.961 * Looking for test storage... 
00:07:49.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:07:49.961 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:50.220 
03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.220 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:50.221 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:50.221 --rc genhtml_branch_coverage=1 00:07:50.221 --rc genhtml_function_coverage=1 00:07:50.221 --rc genhtml_legend=1 00:07:50.221 --rc geninfo_all_blocks=1 00:07:50.221 --rc geninfo_unexecuted_blocks=1 00:07:50.221 00:07:50.221 ' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.221 --rc genhtml_branch_coverage=1 00:07:50.221 --rc genhtml_function_coverage=1 00:07:50.221 --rc genhtml_legend=1 00:07:50.221 --rc geninfo_all_blocks=1 00:07:50.221 --rc geninfo_unexecuted_blocks=1 00:07:50.221 00:07:50.221 ' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.221 --rc genhtml_branch_coverage=1 00:07:50.221 --rc genhtml_function_coverage=1 00:07:50.221 --rc genhtml_legend=1 00:07:50.221 --rc geninfo_all_blocks=1 00:07:50.221 --rc geninfo_unexecuted_blocks=1 00:07:50.221 00:07:50.221 ' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:50.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.221 --rc genhtml_branch_coverage=1 00:07:50.221 --rc genhtml_function_coverage=1 00:07:50.221 --rc genhtml_legend=1 00:07:50.221 --rc geninfo_all_blocks=1 00:07:50.221 --rc geninfo_unexecuted_blocks=1 00:07:50.221 00:07:50.221 ' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.221 03:58:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.221 03:58:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.221 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.221 03:58:18 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.221 03:58:18 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.753 03:58:20 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:52.753 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:52.753 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.753 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:52.754 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:52.754 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.754 
03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:07:52.754 00:07:52.754 --- 10.0.0.2 ping statistics --- 00:07:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.754 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:07:52.754 00:07:52.754 --- 10.0.0.1 ping statistics --- 00:07:52.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.754 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=144806 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 144806 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 144806 ']' 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.754 03:58:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 [2024-12-09 03:58:21.019736] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:07:52.754 [2024-12-09 03:58:21.019820] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.754 [2024-12-09 03:58:21.095053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.754 [2024-12-09 03:58:21.149944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.754 [2024-12-09 03:58:21.150000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:52.754 [2024-12-09 03:58:21.150028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.754 [2024-12-09 03:58:21.150039] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.754 [2024-12-09 03:58:21.150049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.754 [2024-12-09 03:58:21.150703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 [2024-12-09 03:58:21.293036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 Malloc0 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:52.754 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.014 [2024-12-09 03:58:21.339937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.014 03:58:21 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=144897 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 144897 /var/tmp/bdevperf.sock 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 144897 ']' 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:53.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.014 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.014 [2024-12-09 03:58:21.386121] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:07:53.014 [2024-12-09 03:58:21.386199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144897 ] 00:07:53.014 [2024-12-09 03:58:21.452395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.014 [2024-12-09 03:58:21.508981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.272 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.272 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:53.272 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:53.272 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.272 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:53.531 NVMe0n1 00:07:53.531 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.531 03:58:21 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:53.531 Running I/O for 10 seconds... 
00:07:55.841 8192.00 IOPS, 32.00 MiB/s [2024-12-09T02:58:25.361Z] 8496.50 IOPS, 33.19 MiB/s [2024-12-09T02:58:26.299Z] 8533.33 IOPS, 33.33 MiB/s [2024-12-09T02:58:27.233Z] 8687.75 IOPS, 33.94 MiB/s [2024-12-09T02:58:28.165Z] 8667.80 IOPS, 33.86 MiB/s [2024-12-09T02:58:29.100Z] 8701.67 IOPS, 33.99 MiB/s [2024-12-09T02:58:30.478Z] 8759.43 IOPS, 34.22 MiB/s [2024-12-09T02:58:31.415Z] 8792.00 IOPS, 34.34 MiB/s [2024-12-09T02:58:32.352Z] 8788.89 IOPS, 34.33 MiB/s [2024-12-09T02:58:32.352Z] 8800.70 IOPS, 34.38 MiB/s 00:08:03.776 Latency(us) 00:08:03.776 [2024-12-09T02:58:32.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.776 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:03.776 Verification LBA range: start 0x0 length 0x4000 00:08:03.776 NVMe0n1 : 10.07 8845.46 34.55 0.00 0.00 115299.80 10534.31 71846.87 00:08:03.776 [2024-12-09T02:58:32.352Z] =================================================================================================================== 00:08:03.776 [2024-12-09T02:58:32.352Z] Total : 8845.46 34.55 0.00 0.00 115299.80 10534.31 71846.87 00:08:03.776 { 00:08:03.776 "results": [ 00:08:03.776 { 00:08:03.776 "job": "NVMe0n1", 00:08:03.776 "core_mask": "0x1", 00:08:03.776 "workload": "verify", 00:08:03.776 "status": "finished", 00:08:03.776 "verify_range": { 00:08:03.776 "start": 0, 00:08:03.776 "length": 16384 00:08:03.776 }, 00:08:03.776 "queue_depth": 1024, 00:08:03.776 "io_size": 4096, 00:08:03.776 "runtime": 10.065162, 00:08:03.776 "iops": 8845.461205691474, 00:08:03.776 "mibps": 34.55258283473232, 00:08:03.776 "io_failed": 0, 00:08:03.776 "io_timeout": 0, 00:08:03.776 "avg_latency_us": 115299.79550686674, 00:08:03.776 "min_latency_us": 10534.305185185185, 00:08:03.776 "max_latency_us": 71846.87407407408 00:08:03.776 } 00:08:03.776 ], 00:08:03.776 "core_count": 1 00:08:03.776 } 00:08:03.776 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 144897 00:08:03.776 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 144897 ']' 00:08:03.776 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 144897 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144897 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144897' 00:08:03.777 killing process with pid 144897 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 144897 00:08:03.777 Received shutdown signal, test time was about 10.000000 seconds 00:08:03.777 00:08:03.777 Latency(us) 00:08:03.777 [2024-12-09T02:58:32.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:03.777 [2024-12-09T02:58:32.353Z] =================================================================================================================== 00:08:03.777 [2024-12-09T02:58:32.353Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:03.777 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 144897 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:04.035 rmmod nvme_tcp 00:08:04.035 rmmod nvme_fabrics 00:08:04.035 rmmod nvme_keyring 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 144806 ']' 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 144806 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 144806 ']' 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 144806 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:04.035 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.036 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 144806 00:08:04.036 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:04.036 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:04.036 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 144806' 00:08:04.036 killing process with pid 144806 00:08:04.036 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 144806 00:08:04.036 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 144806 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:04.295 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.296 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.296 03:58:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.841 03:58:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.841 00:08:06.841 real 0m16.310s 00:08:06.841 user 0m22.930s 00:08:06.841 sys 0m3.166s 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.841 ************************************ 00:08:06.841 END TEST nvmf_queue_depth 00:08:06.841 ************************************ 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.841 ************************************ 00:08:06.841 START TEST nvmf_target_multipath 00:08:06.841 ************************************ 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:06.841 * Looking for test storage... 
00:08:06.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:06.841 03:58:34 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.841 --rc genhtml_branch_coverage=1 00:08:06.841 --rc genhtml_function_coverage=1 00:08:06.841 --rc genhtml_legend=1 00:08:06.841 --rc geninfo_all_blocks=1 00:08:06.841 --rc geninfo_unexecuted_blocks=1 00:08:06.841 00:08:06.841 ' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.841 --rc genhtml_branch_coverage=1 00:08:06.841 --rc genhtml_function_coverage=1 00:08:06.841 --rc genhtml_legend=1 00:08:06.841 --rc geninfo_all_blocks=1 00:08:06.841 --rc geninfo_unexecuted_blocks=1 00:08:06.841 00:08:06.841 ' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.841 --rc genhtml_branch_coverage=1 00:08:06.841 --rc genhtml_function_coverage=1 00:08:06.841 --rc genhtml_legend=1 00:08:06.841 --rc geninfo_all_blocks=1 00:08:06.841 --rc geninfo_unexecuted_blocks=1 00:08:06.841 00:08:06.841 ' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:06.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.841 --rc genhtml_branch_coverage=1 00:08:06.841 --rc genhtml_function_coverage=1 00:08:06.841 --rc genhtml_legend=1 00:08:06.841 --rc geninfo_all_blocks=1 00:08:06.841 --rc geninfo_unexecuted_blocks=1 00:08:06.841 00:08:06.841 ' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.841 03:58:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.841 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.841 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.841 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.841 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.841 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.842 03:58:35 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:08:08.749 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.749 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:08.749 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:08.749 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:08.749 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:08.750 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:08.750 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:08.750 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:08.750 03:58:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:08.750 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
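To map each PCI address to its kernel interface name, the script globs `/sys/bus/pci/devices/$pci/net/*` (nvmf/common.sh@411) and then strips the directory prefix with the `"${pci_net_devs[@]##*/}"` expansion (nvmf/common.sh@427), yielding names like `cvl_0_0`. A self-contained sketch of that expansion, substituting a scratch directory for sysfs:

```shell
#!/usr/bin/env bash
# Demonstrates the "${arr[@]##*/}" prefix-stripping used at nvmf/common.sh@427.
# A temporary directory stands in for /sys/bus/pci/devices/$pci/net/.
tmp=$(mktemp -d)
mkdir -p "$tmp/net/cvl_0_0" "$tmp/net/cvl_0_1"

pci_net_devs=("$tmp/net/"*)              # full paths, one per interface
pci_net_devs=("${pci_net_devs[@]##*/}")  # keep only the basename of each

printf '%s\n' "${pci_net_devs[@]}"       # prints cvl_0_0 then cvl_0_1
rm -rf "$tmp"
```

The `##*/` pattern deletes the longest match of `*/` from the front of every array element, which is why the subsequent `echo 'Found net devices under ...'` lines in the log report bare interface names rather than sysfs paths.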
00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:08:08.750 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:08.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:08.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:08:08.751 00:08:08.751 --- 10.0.0.2 ping statistics --- 00:08:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.751 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:08.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:08.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:08:08.751 00:08:08.751 --- 10.0.0.1 ping statistics --- 00:08:08.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.751 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:08.751 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:09.011 only one NIC for nvmf test 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:09.011 03:58:37 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:09.011 rmmod nvme_tcp 00:08:09.011 rmmod nvme_fabrics 00:08:09.011 rmmod nvme_keyring 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.011 03:58:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:10.920 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:10.921 00:08:10.921 real 0m4.633s 00:08:10.921 user 0m0.979s 00:08:10.921 sys 0m1.656s 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:10.921 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:10.921 ************************************ 00:08:10.921 END TEST nvmf_target_multipath 00:08:10.921 ************************************ 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.180 ************************************ 00:08:11.180 START TEST nvmf_zcopy 00:08:11.180 ************************************ 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:11.180 * Looking for test storage... 00:08:11.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
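The zcopy test starts by checking the installed `lcov` version through `cmp_versions` in scripts/common.sh: as the trace shows, both version strings are split on `.`, `-`, and `:` (the `IFS=.-:` / `read -ra ver1` steps) and then compared field by field. A rough, simplified re-implementation of the less-than case only (the real helper also handles `>`, `=`, and other operators):

```shell
#!/usr/bin/env bash
# Simplified sketch of the "lt 1.15 2" comparison traced above: split both
# version strings on . - : and compare the numeric fields left to right.
version_lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

This is why the log walks through `decimal 1` and `decimal 2` before concluding with `return 0`: field 0 compares `1 < 2`, so lcov 1.15 is judged older than 2 and the legacy `--rc lcov_branch_coverage=...` option set is exported.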
00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.180 03:58:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:11.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.180 --rc genhtml_branch_coverage=1 00:08:11.180 --rc genhtml_function_coverage=1 00:08:11.180 --rc genhtml_legend=1 00:08:11.180 --rc geninfo_all_blocks=1 00:08:11.180 --rc geninfo_unexecuted_blocks=1 00:08:11.180 00:08:11.180 ' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:11.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.180 --rc genhtml_branch_coverage=1 00:08:11.180 --rc genhtml_function_coverage=1 00:08:11.180 --rc genhtml_legend=1 00:08:11.180 --rc geninfo_all_blocks=1 00:08:11.180 --rc geninfo_unexecuted_blocks=1 00:08:11.180 00:08:11.180 ' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:11.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.180 --rc genhtml_branch_coverage=1 00:08:11.180 --rc genhtml_function_coverage=1 00:08:11.180 --rc genhtml_legend=1 00:08:11.180 --rc geninfo_all_blocks=1 00:08:11.180 --rc geninfo_unexecuted_blocks=1 00:08:11.180 00:08:11.180 ' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:11.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.180 --rc genhtml_branch_coverage=1 00:08:11.180 --rc 
genhtml_function_coverage=1 00:08:11.180 --rc genhtml_legend=1 00:08:11.180 --rc geninfo_all_blocks=1 00:08:11.180 --rc geninfo_unexecuted_blocks=1 00:08:11.180 00:08:11.180 ' 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.180 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.181 03:58:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:11.181 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:11.181 03:58:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:11.181 03:58:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:13.721 03:58:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:13.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:13.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:13.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:13.721 03:58:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:13.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:13.721 03:58:41 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:13.721 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:13.722 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:13.722 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:13.722 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:13.722 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:13.722 03:58:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:13.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:13.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:08:13.722 00:08:13.722 --- 10.0.0.2 ping statistics --- 00:08:13.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.722 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:13.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:13.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:08:13.722 00:08:13.722 --- 10.0.0.1 ping statistics --- 00:08:13.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:13.722 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=150163 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 150163 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 150163 ']' 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:13.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:13.722 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.722 [2024-12-09 03:58:42.215209] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:08:13.722 [2024-12-09 03:58:42.215341] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:13.722 [2024-12-09 03:58:42.287864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.981 [2024-12-09 03:58:42.348269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:13.981 [2024-12-09 03:58:42.348355] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
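The target-namespace plumbing logged above (nvmf/common.sh@250-291) can be condensed into a small script. Interface names, the namespace name, and the IPs below are copied from the log output; everything else is a sketch, not the test harness itself. Since the real commands need root and the actual NICs, a `run` wrapper prints each command instead of executing it when `DRY_RUN=1`:

```shell
#!/usr/bin/env bash
# Sketch of the TCP test-bed setup seen in nvmf_tcp_init above.
# cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk / 10.0.0.1-2 come from the log;
# substitute your own interfaces. DRY_RUN=1 echoes instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk
TGT_IP=10.0.0.2 INI_IP=10.0.0.1

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"            # target NIC moves into the namespace
run ip addr add "$INI_IP/24" dev "$INI_IF"       # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                          # reachability check, as in the log
```

With the target interface inside the namespace, the nvmf target is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly what the `NVMF_TARGET_NS_CMD` prefix in the log does.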
00:08:13.981 [2024-12-09 03:58:42.348385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:13.981 [2024-12-09 03:58:42.348397] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:13.981 [2024-12-09 03:58:42.348407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:13.981 [2024-12-09 03:58:42.349017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 [2024-12-09 03:58:42.498483] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 [2024-12-09 03:58:42.514693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 malloc0 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:13.981 { 00:08:13.981 "params": { 00:08:13.981 "name": "Nvme$subsystem", 00:08:13.981 "trtype": "$TEST_TRANSPORT", 00:08:13.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:13.981 "adrfam": "ipv4", 00:08:13.981 "trsvcid": "$NVMF_PORT", 00:08:13.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:13.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:13.981 "hdgst": ${hdgst:-false}, 00:08:13.981 "ddgst": ${ddgst:-false} 00:08:13.981 }, 00:08:13.981 "method": "bdev_nvme_attach_controller" 00:08:13.981 } 00:08:13.981 EOF 00:08:13.981 )") 00:08:13.981 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:13.982 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
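Stripped of the xtrace noise, the setup sequence target/zcopy.sh@22-30 issues above reduces to six RPC calls. The log invokes them through the `rpc_cmd` helper; the `scripts/rpc.py` path below is an assumption about what that helper resolves to, so the sketch collects and prints the commands rather than executing them against a live target:

```shell
#!/usr/bin/env bash
# The zcopy target setup as plain RPC calls (assumed wrapper: scripts/rpc.py).
RPC="scripts/rpc.py"   # assumption -- the log's rpc_cmd helper wraps this
rpcs=(
  "nvmf_create_transport -t tcp -o -c 0 --zcopy"                                      # zcopy.sh@22
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"   # zcopy.sh@24
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420" # zcopy.sh@25
  "nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420"                  # zcopy.sh@27
  "bdev_malloc_create 32 4096 -b malloc0"                                             # zcopy.sh@29
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1"                     # zcopy.sh@30
)
for r in "${rpcs[@]}"; do
  printf '%s %s\n' "$RPC" "$r"   # print only: no target is running here
done
```

The `--zcopy` flag on the transport is the point of the whole test; the rest is the standard subsystem/listener/namespace boilerplate.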
00:08:13.982 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:14.241 03:58:42 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.241 "params": { 00:08:14.241 "name": "Nvme1", 00:08:14.241 "trtype": "tcp", 00:08:14.241 "traddr": "10.0.0.2", 00:08:14.241 "adrfam": "ipv4", 00:08:14.241 "trsvcid": "4420", 00:08:14.241 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.241 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.241 "hdgst": false, 00:08:14.241 "ddgst": false 00:08:14.241 }, 00:08:14.241 "method": "bdev_nvme_attach_controller" 00:08:14.241 }' 00:08:14.241 [2024-12-09 03:58:42.602489] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:08:14.241 [2024-12-09 03:58:42.602582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150189 ] 00:08:14.241 [2024-12-09 03:58:42.674887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.242 [2024-12-09 03:58:42.733317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.500 Running I/O for 10 seconds... 
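The JSON printed by gen_nvmf_target_json above is never written to disk; it is handed to bdevperf over an anonymous file descriptor (`--json /dev/fd/62`). A minimal reproduction of that pattern, with the parameter values copied from the log and `cat` standing in for bdevperf (which is not assumed to be installed here):

```shell
#!/usr/bin/env bash
# Rebuild the controller-attach config that gen_nvmf_target_json emits above,
# then feed it through process substitution the way the test feeds bdevperf.
config=$(cat <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# In the log this is: bdevperf --json /dev/fd/62 ...; 'cat' is the stand-in.
cat <(printf '%s\n' "$config")
```

The `hdgst`/`ddgst` fields come from the `${hdgst:-false}` defaults in the heredoc template visible in the log; a run with digests enabled would set them before generating the config.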
00:08:16.370 5944.00 IOPS, 46.44 MiB/s [2024-12-09T02:58:46.331Z] 5934.00 IOPS, 46.36 MiB/s [2024-12-09T02:58:47.264Z] 5930.67 IOPS, 46.33 MiB/s [2024-12-09T02:58:48.199Z] 5937.00 IOPS, 46.38 MiB/s [2024-12-09T02:58:49.184Z] 5934.00 IOPS, 46.36 MiB/s [2024-12-09T02:58:50.132Z] 5930.33 IOPS, 46.33 MiB/s [2024-12-09T02:58:51.064Z] 5939.14 IOPS, 46.40 MiB/s [2024-12-09T02:58:52.000Z] 5937.38 IOPS, 46.39 MiB/s [2024-12-09T02:58:53.377Z] 5943.22 IOPS, 46.43 MiB/s [2024-12-09T02:58:53.377Z] 5948.30 IOPS, 46.47 MiB/s 00:08:24.801 Latency(us) 00:08:24.801 [2024-12-09T02:58:53.377Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.801 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:24.801 Verification LBA range: start 0x0 length 0x1000 00:08:24.801 Nvme1n1 : 10.02 5950.28 46.49 0.00 0.00 21453.21 3058.35 29515.47 00:08:24.801 [2024-12-09T02:58:53.377Z] =================================================================================================================== 00:08:24.801 [2024-12-09T02:58:53.377Z] Total : 5950.28 46.49 0.00 0.00 21453.21 3058.35 29515.47 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=151466 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:24.801 03:58:53 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:24.801 { 00:08:24.801 "params": { 00:08:24.801 "name": "Nvme$subsystem", 00:08:24.801 "trtype": "$TEST_TRANSPORT", 00:08:24.801 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:24.801 "adrfam": "ipv4", 00:08:24.801 "trsvcid": "$NVMF_PORT", 00:08:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:24.801 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:24.801 "hdgst": ${hdgst:-false}, 00:08:24.801 "ddgst": ${ddgst:-false} 00:08:24.801 }, 00:08:24.801 "method": "bdev_nvme_attach_controller" 00:08:24.801 } 00:08:24.801 EOF 00:08:24.801 )") 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:24.801 [2024-12-09 03:58:53.200569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.801 [2024-12-09 03:58:53.200611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:24.801 03:58:53 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:24.801 "params": { 00:08:24.801 "name": "Nvme1", 00:08:24.801 "trtype": "tcp", 00:08:24.801 "traddr": "10.0.0.2", 00:08:24.801 "adrfam": "ipv4", 00:08:24.801 "trsvcid": "4420", 00:08:24.801 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.801 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:24.801 "hdgst": false, 00:08:24.801 "ddgst": false 00:08:24.801 }, 00:08:24.801 "method": "bdev_nvme_attach_controller" 00:08:24.801 }' 00:08:24.801 [2024-12-09 03:58:53.208512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.801 [2024-12-09 03:58:53.208537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.216528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.216550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.224549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.224594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.232589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.232610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.238572] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:08:24.802 [2024-12-09 03:58:53.238644] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151466 ] 00:08:24.802 [2024-12-09 03:58:53.240608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.240642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.248644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.248664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.256644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.256663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.264667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.264686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.272692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.272713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.280708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.280728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.288719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.288739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
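The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs above appears to be expected behavior, not a failure: while the second bdevperf run (perfpid 151466) drives I/O, the test presumably keeps re-issuing `nvmf_subsystem_add_ns` against the still-attached NSID 1, and each attempt is rejected at subsystem.c:2130. A tolerant loop in that spirit might look like the following; `try_add_ns` is hypothetical and merely mimics the logged error, where a real test would call the add_ns RPC:

```shell
#!/usr/bin/env bash
# Hedged sketch: repeatedly attempt add_ns and treat "already in use" as benign.
try_add_ns() {
  # stand-in for: scripts/rpc.py nvmf_subsystem_add_ns ... -n 1 (hypothetical)
  echo "Requested NSID 1 already in use" >&2
  return 1
}
errs=0
for _ in 1 2 3; do
  if ! try_add_ns 2>/dev/null; then
    errs=$((errs + 1))   # tolerated: the namespace is still attached
  fi
done
echo "tolerated $errs add_ns failures"   # prints: tolerated 3 add_ns failures
```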
00:08:24.802 [2024-12-09 03:58:53.296742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.296761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.304765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.304784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.308602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.802 [2024-12-09 03:58:53.312785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.312804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.320844] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.320882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.328840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.328865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.336849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.336868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.344870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.344889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.352892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.352912] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.360911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.360939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.368936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.368956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:24.802 [2024-12-09 03:58:53.369212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.802 [2024-12-09 03:58:53.376966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:24.802 [2024-12-09 03:58:53.376988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.061 [2024-12-09 03:58:53.385012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.061 [2024-12-09 03:58:53.385043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.061 [2024-12-09 03:58:53.393033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.061 [2024-12-09 03:58:53.393069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.062 [2024-12-09 03:58:53.401055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.062 [2024-12-09 03:58:53.401091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.062 [2024-12-09 03:58:53.409073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.062 [2024-12-09 03:58:53.409111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.062 [2024-12-09 03:58:53.417100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:08:25.062 [2024-12-09 03:58:53.417139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:25.062 [2024-12-09 03:58:53.425120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:25.062 [2024-12-09 03:58:53.425156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
(the same two error lines repeat roughly every 10 ms from 03:58:53.433 through 03:58:55.113; repeated entries elided)
00:08:25.321 Running I/O for 5 seconds...
00:08:26.358 11957.00 IOPS, 93.41 MiB/s [2024-12-09T02:58:54.934Z]
00:08:26.617 [2024-12-09 03:58:55.125254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.125291] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:08:26.617 [2024-12-09 03:58:55.136800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.136827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.617 [2024-12-09 03:58:55.145863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.145890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.617 [2024-12-09 03:58:55.157356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.157382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.617 [2024-12-09 03:58:55.169384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.169411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.617 [2024-12-09 03:58:55.179032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.179058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.617 [2024-12-09 03:58:55.189474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.617 [2024-12-09 03:58:55.189501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.201613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.201640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.211709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.211736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.222196] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.222222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.232668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.232695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.243493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.243520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.255785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.255811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.265455] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.265492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.275882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.275909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.286305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.286332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.296638] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.296665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.306896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.306925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.317335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.317362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.327880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.327907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.338507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.338534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.349268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.349306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.359605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.359640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.371874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.371903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.381479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.381507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.391884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 
[2024-12-09 03:58:55.391912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.402803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.402830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.415051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.415078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.425356] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.425383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.435808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.435836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.446024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.446051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:26.880 [2024-12-09 03:58:55.456201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:26.880 [2024-12-09 03:58:55.456228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.466413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.466449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.477141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.477169] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.489529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.489556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.498961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.498987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.509343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.509370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.519519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.519547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.529993] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.530021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.540303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.540330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.550331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.550357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.560890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.560916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:27.138 [2024-12-09 03:58:55.571359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.571386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.582146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.582172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.594509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.594536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.606457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.606484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.615595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.615622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.626797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.626824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.639201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.639228] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.649363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.649390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.659712] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.659739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.669942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.669970] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.680002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.680030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.690223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.690252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.700240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.700267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.138 [2024-12-09 03:58:55.710611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.138 [2024-12-09 03:58:55.710639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.721146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.721173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 12070.50 IOPS, 94.30 MiB/s [2024-12-09T02:58:55.972Z] [2024-12-09 03:58:55.731691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.731718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.742861] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.742888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.755057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.755084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.764992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.765019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.775770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.775797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.786347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.786374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.796802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.796829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.807129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.807156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.818055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.818082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.830743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.830771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.840782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.840808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.851045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.851072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.861438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.861466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.871980] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.872007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.882595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.882622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.893235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.893261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.903952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.903978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.914359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 
[2024-12-09 03:58:55.914386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.926738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.926765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.936710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.936737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.946901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.946928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.957755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.957781] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.396 [2024-12-09 03:58:55.970248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.396 [2024-12-09 03:58:55.970282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:55.979905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:55.979932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:55.990174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:55.990201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.000535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.000562] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.011065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.011093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.023566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.023594] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.032984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.033011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.044051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.044078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.054620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.054647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.065001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.654 [2024-12-09 03:58:56.065037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.654 [2024-12-09 03:58:56.075107] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.075133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.085459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.085486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:27.655 [2024-12-09 03:58:56.096097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.096124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.106399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.106427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.116926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.116953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.129523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.129550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.140643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.140670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.149442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.149469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.160982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.161009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.173696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.173723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.183822] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.183849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.194108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.194135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.204602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.204628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.215040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.215067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.655 [2024-12-09 03:58:56.225432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.655 [2024-12-09 03:58:56.225458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.235732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.235774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.246820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.246847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.257170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.257198] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.267571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.267606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.277842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.277868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.288005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.288031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.298571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.298597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.310833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.310860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.320050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.320078] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.330351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.330377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.341017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.341044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.353584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 
[2024-12-09 03:58:56.353610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.363800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.363827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.373907] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.373935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.384499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.384526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.395021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.395048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.405404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.405431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.415709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.415736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.426404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.426431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.436529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.436556] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.446805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.446832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.457317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.457343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.467682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.467717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.478465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.478492] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:27.913 [2024-12-09 03:58:56.488954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:27.913 [2024-12-09 03:58:56.488983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.499610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.499637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.510253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.510290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.520636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.520664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:28.172 [2024-12-09 03:58:56.530893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.530930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.541669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.541696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.552479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.552506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.563715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.563742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.577094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.577122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.587627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.587654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.597975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.598002] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.608610] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.608638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.619119] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.619146] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.629969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.629997] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.643782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.643809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.654403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.654432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.664836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.664863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.675418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.675452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.685832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.685859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.696332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.696359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.708541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.708569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.718593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.718620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 12087.33 IOPS, 94.43 MiB/s [2024-12-09T02:58:56.748Z] [2024-12-09 03:58:56.729101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.729127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.172 [2024-12-09 03:58:56.739785] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.172 [2024-12-09 03:58:56.739811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.750015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.750042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.760668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.760695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.771360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.771387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.785599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.785626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.796323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.796350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.806789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.806816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.817328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.817355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.827940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.827968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.840508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.840536] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.851012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.851048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.861208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.861235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.872014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.872041] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.884174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 
[2024-12-09 03:58:56.884201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.893808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.893835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.904460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.904488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.916655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.916682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.926257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.926294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.937235] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.937262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.949803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.949831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.959762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.959789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.970386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.970413] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.981158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.981185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:56.993586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:56.993612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.431 [2024-12-09 03:58:57.002788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.431 [2024-12-09 03:58:57.002815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.015621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.015648] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.027577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.027603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.037421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.037448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.047597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.047624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.058229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.058256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:28.690 [2024-12-09 03:58:57.068677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.068703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.079061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.079088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.089499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.089526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.100126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.100153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.110409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.110435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.120862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.120889] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.131560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.131587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.144071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.144113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.154218] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.154244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.164808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.164835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.176892] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.176919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.186441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.186468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.196454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.196481] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.207157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.207184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.217807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.217834] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.228315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.228342] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.241752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.241779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.253383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.253409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.690 [2024-12-09 03:58:57.262748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.690 [2024-12-09 03:58:57.262776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.273436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.273463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.285747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.285773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.295987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.296013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.306568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.306595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.316745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.316771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.326946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 
[2024-12-09 03:58:57.326973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.337449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.337476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.348049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.948 [2024-12-09 03:58:57.348075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.948 [2024-12-09 03:58:57.358680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.358707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.370914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.370941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.379838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.379865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.391041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.391068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.401791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.401819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.412459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.412487] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.424794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.424821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.435054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.435081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.445881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.445909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.458816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.458844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.469063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.469089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.479781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.479808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.493087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.493124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:28.949 [2024-12-09 03:58:57.505707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.505734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:28.949 [2024-12-09 03:58:57.515637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:28.949 [2024-12-09 03:58:57.515665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.526287] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.526313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.537017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.537044] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.549077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.549104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.558729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.558757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.569421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.569449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.579430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.579457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.592048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.592075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.602167] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.602194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.612596] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.612623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.622966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.622993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.633513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.633540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.644059] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.644087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.654501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.654529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.664736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.664764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.675122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.675150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.685359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.685386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.695562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.695598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.706095] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.706122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.716471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.716498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.727220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.727248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 12090.25 IOPS, 94.46 MiB/s [2024-12-09T02:58:57.784Z] [2024-12-09 03:58:57.738106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.738142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.748592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.748620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.760897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.760924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.770804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.770831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.208 [2024-12-09 03:58:57.781043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.208 [2024-12-09 03:58:57.781070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.791635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 [2024-12-09 03:58:57.791662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.802064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 [2024-12-09 03:58:57.802090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.812591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 [2024-12-09 03:58:57.812618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.823064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 [2024-12-09 03:58:57.823090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.833314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 [2024-12-09 03:58:57.833341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.844153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 [2024-12-09 03:58:57.844181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:29.467 [2024-12-09 03:58:57.854589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:29.467 
*ERROR*: Requested NSID 1 already in use 00:08:30.265 [2024-12-09 03:58:58.656341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.265 [2024-12-09 03:58:58.666689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.265 [2024-12-09 03:58:58.666716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.265 [2024-12-09 03:58:58.677207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.265 [2024-12-09 03:58:58.677233] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.265 [2024-12-09 03:58:58.687709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.265 [2024-12-09 03:58:58.687736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.265 [2024-12-09 03:58:58.700737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.265 [2024-12-09 03:58:58.700764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.710877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.710904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.721303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.721330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 12089.60 IOPS, 94.45 MiB/s [2024-12-09T02:58:58.842Z] [2024-12-09 03:58:58.731577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.731616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.739642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.739668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 00:08:30.266 Latency(us) 00:08:30.266 [2024-12-09T02:58:58.842Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.266 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:30.266 Nvme1n1 : 5.01 12091.00 94.46 0.00 0.00 10573.47 4636.07 21068.61 00:08:30.266 [2024-12-09T02:58:58.842Z] =================================================================================================================== 00:08:30.266 [2024-12-09T02:58:58.842Z] Total : 12091.00 94.46 0.00 0.00 10573.47 4636.07 21068.61 00:08:30.266 [2024-12-09 03:58:58.746386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.746411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.754401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.754425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.762415] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.762446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.770479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.770523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.778516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.778574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.786538] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.786581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.794549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.794595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.802565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.802611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.810598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.810644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.818617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.818661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.826645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.826691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.266 [2024-12-09 03:58:58.834666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.266 [2024-12-09 03:58:58.834715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.842688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.842744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.850716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.850762] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.858732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.858778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.866749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.866796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.874770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.874818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.882727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.882747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.890746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.890765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.898768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.898787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.906790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.906809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.914843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 
[2024-12-09 03:58:58.914877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.922898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.922947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.930922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.930973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.938881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.938900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.946900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.946919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 [2024-12-09 03:58:58.954924] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:30.525 [2024-12-09 03:58:58.954943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:30.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (151466) - No such process 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 151466 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.525 delay0 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.525 03:58:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:30.784 [2024-12-09 03:58:59.118455] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:37.344 [2024-12-09 03:59:05.588973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x120cc30 is same with the state(6) to be set 00:08:37.344 Initializing NVMe Controllers 00:08:37.344 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:37.344 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:37.344 Initialization complete. Launching workers. 
00:08:37.344 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 2363 00:08:37.344 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 2650, failed to submit 33 00:08:37.344 success 2511, unsuccessful 139, failed 0 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:37.344 rmmod nvme_tcp 00:08:37.344 rmmod nvme_fabrics 00:08:37.344 rmmod nvme_keyring 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 150163 ']' 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 150163 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 150163 ']' 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 150163 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@959 -- # uname 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150163 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150163' 00:08:37.344 killing process with pid 150163 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 150163 00:08:37.344 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 150163 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:37.604 03:59:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.515 03:59:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.515 00:08:39.515 real 0m28.463s 00:08:39.515 user 0m42.697s 00:08:39.515 sys 0m7.595s 00:08:39.515 03:59:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.515 03:59:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:39.515 ************************************ 00:08:39.515 END TEST nvmf_zcopy 00:08:39.515 ************************************ 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.515 ************************************ 00:08:39.515 START TEST nvmf_nmic 00:08:39.515 ************************************ 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:39.515 * Looking for test storage... 
00:08:39.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.515 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.774 03:59:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.774 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.775 --rc genhtml_branch_coverage=1 00:08:39.775 --rc genhtml_function_coverage=1 00:08:39.775 --rc genhtml_legend=1 00:08:39.775 --rc geninfo_all_blocks=1 00:08:39.775 --rc geninfo_unexecuted_blocks=1 
00:08:39.775 00:08:39.775 ' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.775 --rc genhtml_branch_coverage=1 00:08:39.775 --rc genhtml_function_coverage=1 00:08:39.775 --rc genhtml_legend=1 00:08:39.775 --rc geninfo_all_blocks=1 00:08:39.775 --rc geninfo_unexecuted_blocks=1 00:08:39.775 00:08:39.775 ' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.775 --rc genhtml_branch_coverage=1 00:08:39.775 --rc genhtml_function_coverage=1 00:08:39.775 --rc genhtml_legend=1 00:08:39.775 --rc geninfo_all_blocks=1 00:08:39.775 --rc geninfo_unexecuted_blocks=1 00:08:39.775 00:08:39.775 ' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.775 --rc genhtml_branch_coverage=1 00:08:39.775 --rc genhtml_function_coverage=1 00:08:39.775 --rc genhtml_legend=1 00:08:39.775 --rc geninfo_all_blocks=1 00:08:39.775 --rc geninfo_unexecuted_blocks=1 00:08:39.775 00:08:39.775 ' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.775 03:59:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.775 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.775 
03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.775 03:59:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:42.308 03:59:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:42.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:42.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:42.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:42.308 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:42.308 
03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:42.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:42.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.352 ms 00:08:42.308 00:08:42.308 --- 10.0.0.2 ping statistics --- 00:08:42.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.308 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:08:42.308 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:42.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:42.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:08:42.308 00:08:42.308 --- 10.0.0.1 ping statistics --- 00:08:42.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:42.309 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=154906 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:42.309 
03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 154906 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 154906 ']' 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:42.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.309 [2024-12-09 03:59:10.587953] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:08:42.309 [2024-12-09 03:59:10.588049] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:42.309 [2024-12-09 03:59:10.664322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:42.309 [2024-12-09 03:59:10.727317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:42.309 [2024-12-09 03:59:10.727372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:42.309 [2024-12-09 03:59:10.727386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:42.309 [2024-12-09 03:59:10.727398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:08:42.309 [2024-12-09 03:59:10.727408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:42.309 [2024-12-09 03:59:10.728976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.309 [2024-12-09 03:59:10.729003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:42.309 [2024-12-09 03:59:10.729061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:42.309 [2024-12-09 03:59:10.729064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.309 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.309 [2024-12-09 03:59:10.880537] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:42.567 03:59:10 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.567 Malloc0 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.567 [2024-12-09 03:59:10.953564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:42.567 test case1: single bdev can't be used in multiple subsystems 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:42.567 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.568 [2024-12-09 03:59:10.977344] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:42.568 [2024-12-09 03:59:10.977375] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:42.568 [2024-12-09 03:59:10.977391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace
00:08:42.568 request:
00:08:42.568 {
00:08:42.568 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:08:42.568 "namespace": {
00:08:42.568 "bdev_name": "Malloc0",
00:08:42.568 "no_auto_visible": false,
00:08:42.568 "hide_metadata": false
00:08:42.568 },
00:08:42.568 "method": "nvmf_subsystem_add_ns",
00:08:42.568 "req_id": 1
00:08:42.568 }
00:08:42.568 Got JSON-RPC error response
00:08:42.568 response:
00:08:42.568 {
00:08:42.568 "code": -32602,
00:08:42.568 "message": "Invalid parameters"
00:08:42.568 }
00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:08:42.568 Adding namespace failed - expected result.
00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:42.568 test case2: host connect to nvmf target in multiple paths 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:42.568 [2024-12-09 03:59:10.985462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.568 03:59:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:43.136 03:59:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:43.736 03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:43.736 03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:43.736 03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:43.736 03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:43.736 03:59:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:46.263 03:59:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:46.263 [global] 00:08:46.263 thread=1 00:08:46.263 invalidate=1 00:08:46.263 rw=write 00:08:46.263 time_based=1 00:08:46.263 runtime=1 00:08:46.263 ioengine=libaio 00:08:46.263 direct=1 00:08:46.263 bs=4096 00:08:46.263 iodepth=1 00:08:46.263 norandommap=0 00:08:46.263 numjobs=1 00:08:46.263 00:08:46.263 verify_dump=1 00:08:46.263 verify_backlog=512 00:08:46.263 verify_state_save=0 00:08:46.263 do_verify=1 00:08:46.263 verify=crc32c-intel 00:08:46.263 [job0] 00:08:46.263 filename=/dev/nvme0n1 00:08:46.263 Could not set queue depth (nvme0n1) 00:08:46.263 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:46.263 fio-3.35 00:08:46.263 Starting 1 thread 00:08:47.635 00:08:47.635 job0: (groupid=0, jobs=1): err= 0: pid=155428: Mon Dec 9 03:59:15 2024 00:08:47.635 read: IOPS=55, BW=222KiB/s (228kB/s)(228KiB/1026msec) 00:08:47.635 slat (nsec): min=5795, max=33405, avg=15533.07, stdev=9950.79 00:08:47.635 clat (usec): min=194, max=41049, avg=15973.19, stdev=19987.53 00:08:47.635 lat (usec): min=199, max=41065, 
avg=15988.73, stdev=19994.29 00:08:47.635 clat percentiles (usec): 00:08:47.635 | 1.00th=[ 194], 5.00th=[ 215], 10.00th=[ 237], 20.00th=[ 262], 00:08:47.635 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 297], 60.00th=[ 326], 00:08:47.635 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:47.635 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:47.635 | 99.99th=[41157] 00:08:47.635 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:08:47.635 slat (usec): min=7, max=27186, avg=65.49, stdev=1200.95 00:08:47.635 clat (usec): min=127, max=230, avg=154.41, stdev=14.17 00:08:47.635 lat (usec): min=137, max=27398, avg=219.90, stdev=1203.61 00:08:47.635 clat percentiles (usec): 00:08:47.635 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:08:47.635 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:08:47.635 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 174], 95.00th=[ 178], 00:08:47.635 | 99.00th=[ 198], 99.50th=[ 212], 99.90th=[ 231], 99.95th=[ 231], 00:08:47.635 | 99.99th=[ 231] 00:08:47.635 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:08:47.635 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:47.635 lat (usec) : 250=91.92%, 500=4.22% 00:08:47.635 lat (msec) : 50=3.87% 00:08:47.635 cpu : usr=0.68%, sys=0.68%, ctx=571, majf=0, minf=1 00:08:47.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:47.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:47.635 issued rwts: total=57,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:47.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:47.635 00:08:47.635 Run status group 0 (all jobs): 00:08:47.635 READ: bw=222KiB/s (228kB/s), 222KiB/s-222KiB/s (228kB/s-228kB/s), io=228KiB (233kB), run=1026-1026msec 
00:08:47.635 WRITE: bw=1996KiB/s (2044kB/s), 1996KiB/s-1996KiB/s (2044kB/s-2044kB/s), io=2048KiB (2097kB), run=1026-1026msec 00:08:47.635 00:08:47.635 Disk stats (read/write): 00:08:47.635 nvme0n1: ios=105/512, merge=0/0, ticks=972/79, in_queue=1051, util=98.60% 00:08:47.635 03:59:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:47.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:47.635 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:47.636 rmmod nvme_tcp 00:08:47.636 rmmod nvme_fabrics 00:08:47.636 rmmod nvme_keyring 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 154906 ']' 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 154906 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 154906 ']' 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 154906 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154906 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154906' 00:08:47.636 killing process with pid 154906 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 154906 00:08:47.636 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 154906 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- 
# '[' '' == iso ']' 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.894 03:59:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.436 00:08:50.436 real 0m10.439s 00:08:50.436 user 0m23.688s 00:08:50.436 sys 0m2.775s 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.436 ************************************ 00:08:50.436 END TEST nvmf_nmic 00:08:50.436 ************************************ 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.436 03:59:18 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.436 ************************************ 00:08:50.436 START TEST nvmf_fio_target 00:08:50.436 ************************************ 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:50.436 * Looking for test storage... 00:08:50.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:50.436 
03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:50.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.436 --rc genhtml_branch_coverage=1 00:08:50.436 --rc genhtml_function_coverage=1 00:08:50.436 --rc genhtml_legend=1 00:08:50.436 --rc geninfo_all_blocks=1 00:08:50.436 --rc geninfo_unexecuted_blocks=1 00:08:50.436 00:08:50.436 ' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:50.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.436 --rc genhtml_branch_coverage=1 00:08:50.436 --rc genhtml_function_coverage=1 00:08:50.436 --rc genhtml_legend=1 00:08:50.436 --rc geninfo_all_blocks=1 00:08:50.436 --rc geninfo_unexecuted_blocks=1 00:08:50.436 00:08:50.436 ' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:50.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.436 --rc genhtml_branch_coverage=1 00:08:50.436 --rc genhtml_function_coverage=1 00:08:50.436 --rc genhtml_legend=1 00:08:50.436 --rc geninfo_all_blocks=1 00:08:50.436 --rc geninfo_unexecuted_blocks=1 00:08:50.436 00:08:50.436 ' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:50.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:50.436 --rc genhtml_branch_coverage=1 00:08:50.436 --rc 
genhtml_function_coverage=1 00:08:50.436 --rc genhtml_legend=1 00:08:50.436 --rc geninfo_all_blocks=1 00:08:50.436 --rc geninfo_unexecuted_blocks=1 00:08:50.436 00:08:50.436 ' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.436 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:50.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:08:50.437 03:59:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.343 03:59:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:52.343 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:52.343 03:59:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:52.343 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:52.343 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:52.343 Found net devices under 0000:0a:00.1: cvl_0_1 
00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.343 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:08:52.344 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.603 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.603 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.603 03:59:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:52.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:08:52.603 00:08:52.603 --- 10.0.0.2 ping statistics --- 00:08:52.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.603 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:52.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:08:52.603 00:08:52.603 --- 10.0.0.1 ping statistics --- 00:08:52.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.603 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.603 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=157643 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 157643 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 157643 ']' 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.862 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:52.862 [2024-12-09 03:59:21.244033] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:08:52.862 [2024-12-09 03:59:21.244110] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.862 [2024-12-09 03:59:21.313725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.862 [2024-12-09 03:59:21.367385] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.862 [2024-12-09 03:59:21.367445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:52.862 [2024-12-09 03:59:21.367458] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.862 [2024-12-09 03:59:21.367469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.862 [2024-12-09 03:59:21.367479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:52.862 [2024-12-09 03:59:21.369059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.862 [2024-12-09 03:59:21.369167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.862 [2024-12-09 03:59:21.369258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.862 [2024-12-09 03:59:21.369261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.120 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:53.376 [2024-12-09 03:59:21.811212] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.376 03:59:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.634 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:53.634 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:53.895 03:59:22 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:53.895 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.153 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:54.153 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.719 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:54.719 03:59:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:54.719 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:54.977 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:54.977 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.543 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:55.543 03:59:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:55.543 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:55.543 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:56.108 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:56.108 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:56.108 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:56.367 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:56.367 03:59:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:56.625 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.881 [2024-12-09 03:59:25.427402] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.881 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:57.447 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:57.447 03:59:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:58.381 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:58.381 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:58.381 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.381 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:58.381 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:58.381 03:59:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:00.280 03:59:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:00.280 [global] 00:09:00.280 thread=1 00:09:00.280 invalidate=1 00:09:00.280 rw=write 00:09:00.280 time_based=1 00:09:00.280 runtime=1 00:09:00.280 ioengine=libaio 00:09:00.280 direct=1 00:09:00.280 bs=4096 00:09:00.280 iodepth=1 00:09:00.280 norandommap=0 00:09:00.280 numjobs=1 00:09:00.280 00:09:00.280 
verify_dump=1 00:09:00.280 verify_backlog=512 00:09:00.280 verify_state_save=0 00:09:00.280 do_verify=1 00:09:00.280 verify=crc32c-intel 00:09:00.280 [job0] 00:09:00.280 filename=/dev/nvme0n1 00:09:00.280 [job1] 00:09:00.280 filename=/dev/nvme0n2 00:09:00.280 [job2] 00:09:00.280 filename=/dev/nvme0n3 00:09:00.280 [job3] 00:09:00.280 filename=/dev/nvme0n4 00:09:00.280 Could not set queue depth (nvme0n1) 00:09:00.280 Could not set queue depth (nvme0n2) 00:09:00.280 Could not set queue depth (nvme0n3) 00:09:00.280 Could not set queue depth (nvme0n4) 00:09:00.538 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.538 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.538 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.538 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:00.538 fio-3.35 00:09:00.538 Starting 4 threads 00:09:01.911 00:09:01.911 job0: (groupid=0, jobs=1): err= 0: pid=158711: Mon Dec 9 03:59:30 2024 00:09:01.911 read: IOPS=995, BW=3981KiB/s (4076kB/s)(4084KiB/1026msec) 00:09:01.911 slat (nsec): min=5688, max=66165, avg=11050.41, stdev=6681.32 00:09:01.911 clat (usec): min=175, max=42218, avg=778.36, stdev=4750.18 00:09:01.911 lat (usec): min=181, max=42236, avg=789.41, stdev=4751.90 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:09:01.911 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 223], 00:09:01.911 | 70.00th=[ 229], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 273], 00:09:01.911 | 99.00th=[40633], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:01.911 | 99.99th=[42206] 00:09:01.911 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:09:01.911 slat (nsec): min=7428, max=54759, avg=12148.91, 
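For reference, the fio wrapper above prints its global options and per-job filenames inline in the log; assembled as a job file, the configuration it runs is approximately the following (option values and device paths are exactly as reported by the log lines above; this is a reconstruction for readability, not the wrapper's literal output):

```ini
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1

[job1]
filename=/dev/nvme0n2

[job2]
filename=/dev/nvme0n3

[job3]
filename=/dev/nvme0n4
```

The four job sections target the four namespaces (Malloc0, Malloc1, raid0, concat0) that the preceding rpc.py calls attached to nqn.2016-06.io.spdk:cnode1, which is why `waitforserial SPDKISFASTANDAWESOME 4` expects four block devices before fio starts.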
stdev=6946.10 00:09:01.911 clat (usec): min=134, max=314, avg=194.89, stdev=31.91 00:09:01.911 lat (usec): min=142, max=325, avg=207.04, stdev=34.23 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 163], 00:09:01.911 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 194], 60.00th=[ 202], 00:09:01.911 | 70.00th=[ 212], 80.00th=[ 225], 90.00th=[ 239], 95.00th=[ 247], 00:09:01.911 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 273], 99.95th=[ 314], 00:09:01.911 | 99.99th=[ 314] 00:09:01.911 bw ( KiB/s): min= 8192, max= 8192, per=59.20%, avg=8192.00, stdev= 0.00, samples=1 00:09:01.911 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:01.911 lat (usec) : 250=93.15%, 500=6.06%, 750=0.10% 00:09:01.911 lat (msec) : 50=0.68% 00:09:01.911 cpu : usr=1.66%, sys=3.12%, ctx=2047, majf=0, minf=1 00:09:01.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.911 issued rwts: total=1021,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.911 job1: (groupid=0, jobs=1): err= 0: pid=158713: Mon Dec 9 03:59:30 2024 00:09:01.911 read: IOPS=276, BW=1107KiB/s (1134kB/s)(1124KiB/1015msec) 00:09:01.911 slat (nsec): min=6635, max=34368, avg=11571.07, stdev=5764.34 00:09:01.911 clat (usec): min=206, max=42959, avg=3265.30, stdev=10547.38 00:09:01.911 lat (usec): min=216, max=42979, avg=3276.87, stdev=10551.01 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 208], 5.00th=[ 219], 10.00th=[ 235], 20.00th=[ 247], 00:09:01.911 | 30.00th=[ 253], 40.00th=[ 273], 50.00th=[ 289], 60.00th=[ 424], 00:09:01.911 | 70.00th=[ 465], 80.00th=[ 506], 90.00th=[ 635], 95.00th=[41157], 00:09:01.911 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 
99.95th=[42730], 00:09:01.911 | 99.99th=[42730] 00:09:01.911 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:09:01.911 slat (nsec): min=5727, max=45693, avg=11611.37, stdev=5538.97 00:09:01.911 clat (usec): min=137, max=304, avg=165.91, stdev=14.23 00:09:01.911 lat (usec): min=144, max=311, avg=177.52, stdev=15.77 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:09:01.911 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 167], 00:09:01.911 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:09:01.911 | 99.00th=[ 200], 99.50th=[ 221], 99.90th=[ 306], 99.95th=[ 306], 00:09:01.911 | 99.99th=[ 306] 00:09:01.911 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.911 lat (usec) : 250=73.77%, 500=18.28%, 750=5.30%, 1000=0.13% 00:09:01.911 lat (msec) : 50=2.52% 00:09:01.911 cpu : usr=0.79%, sys=0.69%, ctx=793, majf=0, minf=1 00:09:01.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.911 issued rwts: total=281,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.911 job2: (groupid=0, jobs=1): err= 0: pid=158716: Mon Dec 9 03:59:30 2024 00:09:01.911 read: IOPS=265, BW=1064KiB/s (1089kB/s)(1084KiB/1019msec) 00:09:01.911 slat (nsec): min=7641, max=43305, avg=12400.84, stdev=6770.97 00:09:01.911 clat (usec): min=223, max=42115, avg=3294.18, stdev=10480.84 00:09:01.911 lat (usec): min=233, max=42135, avg=3306.58, stdev=10482.62 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 245], 00:09:01.911 | 30.00th=[ 262], 
40.00th=[ 334], 50.00th=[ 429], 60.00th=[ 457], 00:09:01.911 | 70.00th=[ 478], 80.00th=[ 502], 90.00th=[ 562], 95.00th=[41157], 00:09:01.911 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:01.911 | 99.99th=[42206] 00:09:01.911 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:09:01.911 slat (nsec): min=6763, max=61035, avg=13019.91, stdev=6172.62 00:09:01.911 clat (usec): min=172, max=326, avg=220.02, stdev=22.12 00:09:01.911 lat (usec): min=181, max=373, avg=233.04, stdev=22.27 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:09:01.911 | 30.00th=[ 206], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:09:01.911 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 258], 00:09:01.911 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 326], 99.95th=[ 326], 00:09:01.911 | 99.99th=[ 326] 00:09:01.911 bw ( KiB/s): min= 4096, max= 4096, per=29.60%, avg=4096.00, stdev= 0.00, samples=1 00:09:01.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:01.911 lat (usec) : 250=67.43%, 500=25.16%, 750=4.73%, 1000=0.13% 00:09:01.911 lat (msec) : 20=0.13%, 50=2.43% 00:09:01.911 cpu : usr=0.79%, sys=0.88%, ctx=786, majf=0, minf=1 00:09:01.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.911 issued rwts: total=271,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.911 job3: (groupid=0, jobs=1): err= 0: pid=158717: Mon Dec 9 03:59:30 2024 00:09:01.911 read: IOPS=1000, BW=4000KiB/s (4096kB/s)(4144KiB/1036msec) 00:09:01.911 slat (nsec): min=5750, max=38965, avg=10236.67, stdev=5467.23 00:09:01.911 clat (usec): min=194, max=41395, avg=661.58, stdev=4179.87 00:09:01.911 
lat (usec): min=200, max=41413, avg=671.82, stdev=4182.04 00:09:01.911 clat percentiles (usec): 00:09:01.911 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:09:01.911 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:09:01.911 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 255], 95.00th=[ 265], 00:09:01.911 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:01.911 | 99.99th=[41157] 00:09:01.911 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets 00:09:01.911 slat (nsec): min=7145, max=58707, avg=13138.10, stdev=7056.42 00:09:01.912 clat (usec): min=150, max=788, avg=202.06, stdev=49.63 00:09:01.912 lat (usec): min=158, max=808, avg=215.20, stdev=53.20 00:09:01.912 clat percentiles (usec): 00:09:01.912 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:09:01.912 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 196], 00:09:01.912 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 262], 95.00th=[ 285], 00:09:01.912 | 99.00th=[ 392], 99.50th=[ 437], 99.90th=[ 775], 99.95th=[ 791], 00:09:01.912 | 99.99th=[ 791] 00:09:01.912 bw ( KiB/s): min= 3176, max= 9112, per=44.40%, avg=6144.00, stdev=4197.39, samples=2 00:09:01.912 iops : min= 794, max= 2278, avg=1536.00, stdev=1049.35, samples=2 00:09:01.912 lat (usec) : 250=85.69%, 500=13.72%, 750=0.08%, 1000=0.08% 00:09:01.912 lat (msec) : 50=0.43% 00:09:01.912 cpu : usr=2.22%, sys=3.86%, ctx=2572, majf=0, minf=2 00:09:01.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:01.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:01.912 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:01.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:01.912 00:09:01.912 Run status group 0 (all jobs): 00:09:01.912 READ: bw=9.84MiB/s (10.3MB/s), 
1064KiB/s-4000KiB/s (1089kB/s-4096kB/s), io=10.2MiB (10.7MB), run=1015-1036msec 00:09:01.912 WRITE: bw=13.5MiB/s (14.2MB/s), 2010KiB/s-5931KiB/s (2058kB/s-6073kB/s), io=14.0MiB (14.7MB), run=1015-1036msec 00:09:01.912 00:09:01.912 Disk stats (read/write): 00:09:01.912 nvme0n1: ios=1065/1024, merge=0/0, ticks=1462/193, in_queue=1655, util=97.49% 00:09:01.912 nvme0n2: ios=296/512, merge=0/0, ticks=735/82, in_queue=817, util=86.53% 00:09:01.912 nvme0n3: ios=283/512, merge=0/0, ticks=1641/107, in_queue=1748, util=97.58% 00:09:01.912 nvme0n4: ios=1031/1536, merge=0/0, ticks=470/291, in_queue=761, util=89.50% 00:09:01.912 03:59:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:01.912 [global] 00:09:01.912 thread=1 00:09:01.912 invalidate=1 00:09:01.912 rw=randwrite 00:09:01.912 time_based=1 00:09:01.912 runtime=1 00:09:01.912 ioengine=libaio 00:09:01.912 direct=1 00:09:01.912 bs=4096 00:09:01.912 iodepth=1 00:09:01.912 norandommap=0 00:09:01.912 numjobs=1 00:09:01.912 00:09:01.912 verify_dump=1 00:09:01.912 verify_backlog=512 00:09:01.912 verify_state_save=0 00:09:01.912 do_verify=1 00:09:01.912 verify=crc32c-intel 00:09:01.912 [job0] 00:09:01.912 filename=/dev/nvme0n1 00:09:01.912 [job1] 00:09:01.912 filename=/dev/nvme0n2 00:09:01.912 [job2] 00:09:01.912 filename=/dev/nvme0n3 00:09:01.912 [job3] 00:09:01.912 filename=/dev/nvme0n4 00:09:01.912 Could not set queue depth (nvme0n1) 00:09:01.912 Could not set queue depth (nvme0n2) 00:09:01.912 Could not set queue depth (nvme0n3) 00:09:01.912 Could not set queue depth (nvme0n4) 00:09:01.912 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.912 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.912 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, 
(T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.912 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:01.912 fio-3.35 00:09:01.912 Starting 4 threads 00:09:03.283 00:09:03.283 job0: (groupid=0, jobs=1): err= 0: pid=158949: Mon Dec 9 03:59:31 2024 00:09:03.283 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:03.283 slat (nsec): min=4600, max=51148, avg=9434.67, stdev=3719.86 00:09:03.283 clat (usec): min=174, max=42052, avg=421.31, stdev=2969.52 00:09:03.283 lat (usec): min=180, max=42065, avg=430.75, stdev=2970.62 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:09:03.283 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 208], 00:09:03.283 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 225], 95.00th=[ 233], 00:09:03.283 | 99.00th=[ 297], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:09:03.283 | 99.99th=[42206] 00:09:03.283 write: IOPS=1865, BW=7461KiB/s (7640kB/s)(7468KiB/1001msec); 0 zone resets 00:09:03.283 slat (nsec): min=5953, max=50753, avg=11334.37, stdev=5044.97 00:09:03.283 clat (usec): min=135, max=312, avg=164.69, stdev=21.17 00:09:03.283 lat (usec): min=141, max=323, avg=176.02, stdev=21.75 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:03.283 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:09:03.283 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 182], 95.00th=[ 190], 00:09:03.283 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 306], 99.95th=[ 314], 00:09:03.283 | 99.99th=[ 314] 00:09:03.283 bw ( KiB/s): min= 8192, max= 8192, per=42.75%, avg=8192.00, stdev= 0.00, samples=1 00:09:03.283 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:03.283 lat (usec) : 250=97.97%, 500=1.76%, 750=0.03% 00:09:03.283 lat (msec) : 50=0.24% 00:09:03.283 cpu : usr=1.80%, sys=3.80%, ctx=3404, majf=0, 
minf=1 00:09:03.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 issued rwts: total=1536,1867,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.283 job1: (groupid=0, jobs=1): err= 0: pid=158953: Mon Dec 9 03:59:31 2024 00:09:03.283 read: IOPS=1202, BW=4811KiB/s (4927kB/s)(4816KiB/1001msec) 00:09:03.283 slat (nsec): min=4733, max=63137, avg=11742.82, stdev=5613.68 00:09:03.283 clat (usec): min=196, max=41181, avg=517.88, stdev=3313.99 00:09:03.283 lat (usec): min=203, max=41197, avg=529.63, stdev=3313.98 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:09:03.283 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 251], 00:09:03.283 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:09:03.283 | 99.00th=[ 523], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:03.283 | 99.99th=[41157] 00:09:03.283 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:09:03.283 slat (nsec): min=6110, max=69517, avg=16686.39, stdev=8551.94 00:09:03.283 clat (usec): min=150, max=415, avg=211.48, stdev=57.80 00:09:03.283 lat (usec): min=159, max=469, avg=228.16, stdev=62.52 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:09:03.283 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 198], 00:09:03.283 | 70.00th=[ 210], 80.00th=[ 239], 90.00th=[ 310], 95.00th=[ 355], 00:09:03.283 | 99.00th=[ 388], 99.50th=[ 396], 99.90th=[ 412], 99.95th=[ 416], 00:09:03.283 | 99.99th=[ 416] 00:09:03.283 bw ( KiB/s): min= 4440, max= 4440, per=23.17%, avg=4440.00, stdev= 0.00, samples=1 00:09:03.283 iops : min= 1110, max= 1110, 
avg=1110.00, stdev= 0.00, samples=1 00:09:03.283 lat (usec) : 250=72.04%, 500=27.48%, 750=0.15% 00:09:03.283 lat (msec) : 2=0.04%, 50=0.29% 00:09:03.283 cpu : usr=2.50%, sys=5.20%, ctx=2740, majf=0, minf=1 00:09:03.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 issued rwts: total=1204,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.283 job2: (groupid=0, jobs=1): err= 0: pid=158954: Mon Dec 9 03:59:31 2024 00:09:03.283 read: IOPS=33, BW=134KiB/s (137kB/s)(136KiB/1013msec) 00:09:03.283 slat (nsec): min=10237, max=35794, avg=21417.71, stdev=6894.57 00:09:03.283 clat (usec): min=249, max=42025, avg=26731.12, stdev=19780.35 00:09:03.283 lat (usec): min=266, max=42043, avg=26752.54, stdev=19778.38 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 249], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 371], 00:09:03.283 | 30.00th=[ 424], 40.00th=[40633], 50.00th=[40633], 60.00th=[41157], 00:09:03.283 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:09:03.283 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:03.283 | 99.99th=[42206] 00:09:03.283 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:09:03.283 slat (nsec): min=7921, max=28790, avg=9896.13, stdev=2829.22 00:09:03.283 clat (usec): min=153, max=392, avg=187.09, stdev=16.75 00:09:03.283 lat (usec): min=162, max=401, avg=196.99, stdev=17.38 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:09:03.283 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 188], 00:09:03.283 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 215], 00:09:03.283 | 99.00th=[ 227], 99.50th=[ 235], 
99.90th=[ 392], 99.95th=[ 392], 00:09:03.283 | 99.99th=[ 392] 00:09:03.283 bw ( KiB/s): min= 4096, max= 4096, per=21.38%, avg=4096.00, stdev= 0.00, samples=1 00:09:03.283 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:03.283 lat (usec) : 250=93.59%, 500=2.20%, 750=0.18% 00:09:03.283 lat (msec) : 50=4.03% 00:09:03.283 cpu : usr=0.59%, sys=0.40%, ctx=548, majf=0, minf=1 00:09:03.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 issued rwts: total=34,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.283 job3: (groupid=0, jobs=1): err= 0: pid=158955: Mon Dec 9 03:59:31 2024 00:09:03.283 read: IOPS=838, BW=3352KiB/s (3433kB/s)(3456KiB/1031msec) 00:09:03.283 slat (nsec): min=7236, max=62924, avg=13670.91, stdev=6250.63 00:09:03.283 clat (usec): min=211, max=41335, avg=925.93, stdev=5144.31 00:09:03.283 lat (usec): min=219, max=41355, avg=939.60, stdev=5144.52 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 237], 20.00th=[ 243], 00:09:03.283 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:09:03.283 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[ 388], 00:09:03.283 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:03.283 | 99.99th=[41157] 00:09:03.283 write: IOPS=993, BW=3973KiB/s (4068kB/s)(4096KiB/1031msec); 0 zone resets 00:09:03.283 slat (nsec): min=7976, max=66119, avg=15249.46, stdev=7260.53 00:09:03.283 clat (usec): min=144, max=414, avg=190.15, stdev=25.35 00:09:03.283 lat (usec): min=155, max=440, avg=205.40, stdev=25.78 00:09:03.283 clat percentiles (usec): 00:09:03.283 | 1.00th=[ 153], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:09:03.283 | 30.00th=[ 
180], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:09:03.283 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 229], 00:09:03.283 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 404], 99.95th=[ 416], 00:09:03.283 | 99.99th=[ 416] 00:09:03.283 bw ( KiB/s): min= 8192, max= 8192, per=42.75%, avg=8192.00, stdev= 0.00, samples=1 00:09:03.283 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:03.283 lat (usec) : 250=69.44%, 500=29.18%, 750=0.64% 00:09:03.283 lat (msec) : 50=0.74% 00:09:03.283 cpu : usr=1.84%, sys=3.50%, ctx=1889, majf=0, minf=1 00:09:03.283 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:03.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:03.283 issued rwts: total=864,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:03.283 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:03.283 00:09:03.283 Run status group 0 (all jobs): 00:09:03.283 READ: bw=13.8MiB/s (14.5MB/s), 134KiB/s-6138KiB/s (137kB/s-6285kB/s), io=14.2MiB (14.9MB), run=1001-1031msec 00:09:03.283 WRITE: bw=18.7MiB/s (19.6MB/s), 2022KiB/s-7461KiB/s (2070kB/s-7640kB/s), io=19.3MiB (20.2MB), run=1001-1031msec 00:09:03.283 00:09:03.283 Disk stats (read/write): 00:09:03.283 nvme0n1: ios=1160/1536, merge=0/0, ticks=1108/257, in_queue=1365, util=97.39% 00:09:03.283 nvme0n2: ios=1024/1234, merge=0/0, ticks=521/261, in_queue=782, util=86.48% 00:09:03.283 nvme0n3: ios=53/512, merge=0/0, ticks=1735/90, in_queue=1825, util=97.91% 00:09:03.283 nvme0n4: ios=737/1024, merge=0/0, ticks=1536/188, in_queue=1724, util=97.68% 00:09:03.283 03:59:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:03.283 [global] 00:09:03.283 thread=1 00:09:03.283 invalidate=1 00:09:03.283 rw=write 00:09:03.283 
time_based=1 00:09:03.283 runtime=1 00:09:03.283 ioengine=libaio 00:09:03.283 direct=1 00:09:03.283 bs=4096 00:09:03.283 iodepth=128 00:09:03.283 norandommap=0 00:09:03.283 numjobs=1 00:09:03.283 00:09:03.283 verify_dump=1 00:09:03.283 verify_backlog=512 00:09:03.283 verify_state_save=0 00:09:03.283 do_verify=1 00:09:03.283 verify=crc32c-intel 00:09:03.283 [job0] 00:09:03.283 filename=/dev/nvme0n1 00:09:03.284 [job1] 00:09:03.284 filename=/dev/nvme0n2 00:09:03.284 [job2] 00:09:03.284 filename=/dev/nvme0n3 00:09:03.284 [job3] 00:09:03.284 filename=/dev/nvme0n4 00:09:03.284 Could not set queue depth (nvme0n1) 00:09:03.284 Could not set queue depth (nvme0n2) 00:09:03.284 Could not set queue depth (nvme0n3) 00:09:03.284 Could not set queue depth (nvme0n4) 00:09:03.540 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.540 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.540 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.540 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:03.540 fio-3.35 00:09:03.540 Starting 4 threads 00:09:04.919 00:09:04.919 job0: (groupid=0, jobs=1): err= 0: pid=159273: Mon Dec 9 03:59:33 2024 00:09:04.919 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec) 00:09:04.919 slat (usec): min=2, max=16296, avg=177.44, stdev=1105.48 00:09:04.919 clat (usec): min=6439, max=55123, avg=23079.68, stdev=9680.74 00:09:04.919 lat (usec): min=6445, max=55162, avg=23257.13, stdev=9789.63 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 9372], 5.00th=[11338], 10.00th=[13960], 20.00th=[16450], 00:09:04.919 | 30.00th=[17171], 40.00th=[17957], 50.00th=[19530], 60.00th=[20841], 00:09:04.919 | 70.00th=[23462], 80.00th=[32375], 90.00th=[40109], 95.00th=[43779], 00:09:04.919 | 99.00th=[47449], 
99.50th=[47973], 99.90th=[52691], 99.95th=[54264], 00:09:04.919 | 99.99th=[55313] 00:09:04.919 write: IOPS=3082, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1003msec); 0 zone resets 00:09:04.919 slat (usec): min=3, max=14458, avg=135.82, stdev=835.11 00:09:04.919 clat (usec): min=2751, max=48305, avg=18010.95, stdev=7180.76 00:09:04.919 lat (usec): min=5729, max=48342, avg=18146.77, stdev=7245.69 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[10683], 20.00th=[13173], 00:09:04.919 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16057], 60.00th=[17171], 00:09:04.919 | 70.00th=[19268], 80.00th=[22152], 90.00th=[28181], 95.00th=[33817], 00:09:04.919 | 99.00th=[41157], 99.50th=[41681], 99.90th=[43254], 99.95th=[45351], 00:09:04.919 | 99.99th=[48497] 00:09:04.919 bw ( KiB/s): min= 8200, max=16376, per=19.13%, avg=12288.00, stdev=5781.31, samples=2 00:09:04.919 iops : min= 2050, max= 4094, avg=3072.00, stdev=1445.33, samples=2 00:09:04.919 lat (msec) : 4=0.02%, 10=4.59%, 20=57.97%, 50=37.31%, 100=0.11% 00:09:04.919 cpu : usr=3.39%, sys=6.18%, ctx=223, majf=0, minf=1 00:09:04.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:09:04.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.919 issued rwts: total=3072,3092,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.919 job1: (groupid=0, jobs=1): err= 0: pid=159293: Mon Dec 9 03:59:33 2024 00:09:04.919 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:09:04.919 slat (usec): min=3, max=8718, avg=117.86, stdev=635.66 00:09:04.919 clat (usec): min=7958, max=35708, avg=15265.76, stdev=3696.31 00:09:04.919 lat (usec): min=8342, max=35713, avg=15383.62, stdev=3758.73 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 8455], 5.00th=[10814], 10.00th=[11469], 
20.00th=[11994], 00:09:04.919 | 30.00th=[13173], 40.00th=[14091], 50.00th=[15008], 60.00th=[15664], 00:09:04.919 | 70.00th=[16450], 80.00th=[17695], 90.00th=[19268], 95.00th=[21627], 00:09:04.919 | 99.00th=[29492], 99.50th=[31589], 99.90th=[35914], 99.95th=[35914], 00:09:04.919 | 99.99th=[35914] 00:09:04.919 write: IOPS=3685, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1007msec); 0 zone resets 00:09:04.919 slat (usec): min=4, max=8397, avg=143.87, stdev=639.46 00:09:04.919 clat (usec): min=4735, max=48787, avg=19614.08, stdev=8899.55 00:09:04.919 lat (usec): min=5455, max=48801, avg=19757.95, stdev=8961.75 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[11207], 20.00th=[11863], 00:09:04.919 | 30.00th=[12387], 40.00th=[14353], 50.00th=[15270], 60.00th=[20841], 00:09:04.919 | 70.00th=[23987], 80.00th=[28181], 90.00th=[33424], 95.00th=[38011], 00:09:04.919 | 99.00th=[41157], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021], 00:09:04.919 | 99.99th=[49021] 00:09:04.919 bw ( KiB/s): min=12344, max=16384, per=22.36%, avg=14364.00, stdev=2856.71, samples=2 00:09:04.919 iops : min= 3086, max= 4096, avg=3591.00, stdev=714.18, samples=2 00:09:04.919 lat (msec) : 10=2.34%, 20=73.06%, 50=24.59% 00:09:04.919 cpu : usr=4.97%, sys=9.94%, ctx=389, majf=0, minf=2 00:09:04.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:09:04.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.919 issued rwts: total=3584,3711,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.919 job2: (groupid=0, jobs=1): err= 0: pid=159302: Mon Dec 9 03:59:33 2024 00:09:04.919 read: IOPS=4886, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec) 00:09:04.919 slat (usec): min=3, max=4070, avg=94.53, stdev=478.69 00:09:04.919 clat (usec): min=2243, max=19514, avg=12759.06, 
stdev=1568.99 00:09:04.919 lat (usec): min=2249, max=20945, avg=12853.59, stdev=1576.75 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 6194], 5.00th=[10159], 10.00th=[10945], 20.00th=[12256], 00:09:04.919 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13173], 00:09:04.919 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13960], 95.00th=[14615], 00:09:04.919 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19268], 99.95th=[19530], 00:09:04.919 | 99.99th=[19530] 00:09:04.919 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:09:04.919 slat (usec): min=4, max=6103, avg=92.87, stdev=463.88 00:09:04.919 clat (usec): min=8584, max=19401, avg=12526.41, stdev=1437.80 00:09:04.919 lat (usec): min=8610, max=19415, avg=12619.27, stdev=1442.06 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[11863], 00:09:04.919 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12649], 60.00th=[12780], 00:09:04.919 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13698], 95.00th=[15139], 00:09:04.919 | 99.00th=[16581], 99.50th=[18482], 99.90th=[19268], 99.95th=[19268], 00:09:04.919 | 99.99th=[19530] 00:09:04.919 bw ( KiB/s): min=20480, max=20480, per=31.89%, avg=20480.00, stdev= 0.00, samples=2 00:09:04.919 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:09:04.919 lat (msec) : 4=0.23%, 10=5.41%, 20=94.36% 00:09:04.919 cpu : usr=7.29%, sys=13.07%, ctx=442, majf=0, minf=1 00:09:04.919 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:04.919 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.919 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.919 issued rwts: total=4901,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.919 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.919 job3: (groupid=0, jobs=1): err= 0: pid=159303: Mon Dec 9 03:59:33 2024 00:09:04.919 read: 
IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:09:04.919 slat (usec): min=2, max=32298, avg=127.10, stdev=1078.68 00:09:04.919 clat (usec): min=4467, max=81539, avg=15366.06, stdev=9035.78 00:09:04.919 lat (usec): min=4474, max=81553, avg=15493.16, stdev=9133.92 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 6849], 5.00th=[ 9634], 10.00th=[11731], 20.00th=[12125], 00:09:04.919 | 30.00th=[12256], 40.00th=[12387], 50.00th=[12518], 60.00th=[12649], 00:09:04.919 | 70.00th=[12911], 80.00th=[14877], 90.00th=[21627], 95.00th=[34341], 00:09:04.919 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:09:04.919 | 99.99th=[81265] 00:09:04.919 write: IOPS=4263, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1011msec); 0 zone resets 00:09:04.919 slat (usec): min=3, max=26912, avg=103.67, stdev=820.98 00:09:04.919 clat (usec): min=2820, max=78395, avg=14693.62, stdev=8970.95 00:09:04.919 lat (usec): min=2829, max=78407, avg=14797.29, stdev=9061.49 00:09:04.919 clat percentiles (usec): 00:09:04.919 | 1.00th=[ 4178], 5.00th=[ 7767], 10.00th=[ 9634], 20.00th=[11469], 00:09:04.919 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12518], 60.00th=[12649], 00:09:04.919 | 70.00th=[13304], 80.00th=[13435], 90.00th=[23200], 95.00th=[40109], 00:09:04.919 | 99.00th=[51119], 99.50th=[51119], 99.90th=[62129], 99.95th=[66847], 00:09:04.919 | 99.99th=[78119] 00:09:04.920 bw ( KiB/s): min=12880, max=20584, per=26.05%, avg=16732.00, stdev=5447.55, samples=2 00:09:04.920 iops : min= 3220, max= 5146, avg=4183.00, stdev=1361.89, samples=2 00:09:04.920 lat (msec) : 4=0.51%, 10=8.37%, 20=78.61%, 50=10.94%, 100=1.56% 00:09:04.920 cpu : usr=3.56%, sys=6.63%, ctx=438, majf=0, minf=1 00:09:04.920 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:04.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:04.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:04.920 issued rwts: 
total=4096,4310,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:04.920 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:04.920 00:09:04.920 Run status group 0 (all jobs): 00:09:04.920 READ: bw=60.5MiB/s (63.4MB/s), 12.0MiB/s-19.1MiB/s (12.5MB/s-20.0MB/s), io=61.1MiB (64.1MB), run=1003-1011msec 00:09:04.920 WRITE: bw=62.7MiB/s (65.8MB/s), 12.0MiB/s-19.9MiB/s (12.6MB/s-20.9MB/s), io=63.4MiB (66.5MB), run=1003-1011msec 00:09:04.920 00:09:04.920 Disk stats (read/write): 00:09:04.920 nvme0n1: ios=2580/2791, merge=0/0, ticks=30724/25704, in_queue=56428, util=98.90% 00:09:04.920 nvme0n2: ios=3121/3207, merge=0/0, ticks=22274/28527, in_queue=50801, util=87.70% 00:09:04.920 nvme0n3: ios=4148/4344, merge=0/0, ticks=17500/16480, in_queue=33980, util=98.85% 00:09:04.920 nvme0n4: ios=3249/3584, merge=0/0, ticks=29848/28684, in_queue=58532, util=97.79% 00:09:04.920 03:59:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:04.920 [global] 00:09:04.920 thread=1 00:09:04.920 invalidate=1 00:09:04.920 rw=randwrite 00:09:04.920 time_based=1 00:09:04.920 runtime=1 00:09:04.920 ioengine=libaio 00:09:04.920 direct=1 00:09:04.920 bs=4096 00:09:04.920 iodepth=128 00:09:04.920 norandommap=0 00:09:04.920 numjobs=1 00:09:04.920 00:09:04.920 verify_dump=1 00:09:04.920 verify_backlog=512 00:09:04.920 verify_state_save=0 00:09:04.920 do_verify=1 00:09:04.920 verify=crc32c-intel 00:09:04.920 [job0] 00:09:04.920 filename=/dev/nvme0n1 00:09:04.920 [job1] 00:09:04.920 filename=/dev/nvme0n2 00:09:04.920 [job2] 00:09:04.920 filename=/dev/nvme0n3 00:09:04.920 [job3] 00:09:04.920 filename=/dev/nvme0n4 00:09:04.920 Could not set queue depth (nvme0n1) 00:09:04.920 Could not set queue depth (nvme0n2) 00:09:04.920 Could not set queue depth (nvme0n3) 00:09:04.920 Could not set queue depth (nvme0n4) 00:09:04.920 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.920 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.920 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.920 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:04.920 fio-3.35 00:09:04.920 Starting 4 threads 00:09:06.296 00:09:06.296 job0: (groupid=0, jobs=1): err= 0: pid=159533: Mon Dec 9 03:59:34 2024 00:09:06.296 read: IOPS=4019, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1019msec) 00:09:06.296 slat (usec): min=2, max=11600, avg=102.25, stdev=635.55 00:09:06.296 clat (usec): min=5928, max=36238, avg=12977.05, stdev=3233.51 00:09:06.296 lat (usec): min=5938, max=36245, avg=13079.30, stdev=3290.08 00:09:06.296 clat percentiles (usec): 00:09:06.296 | 1.00th=[ 7373], 5.00th=[10159], 10.00th=[11469], 20.00th=[11994], 00:09:06.296 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:09:06.296 | 70.00th=[12780], 80.00th=[12911], 90.00th=[14484], 95.00th=[18220], 00:09:06.296 | 99.00th=[30016], 99.50th=[32375], 99.90th=[36439], 99.95th=[36439], 00:09:06.296 | 99.99th=[36439] 00:09:06.296 write: IOPS=4410, BW=17.2MiB/s (18.1MB/s)(17.6MiB/1019msec); 0 zone resets 00:09:06.296 slat (usec): min=4, max=10327, avg=118.54, stdev=622.34 00:09:06.296 clat (usec): min=3101, max=63815, avg=16837.99, stdev=11886.55 00:09:06.296 lat (usec): min=3111, max=63821, avg=16956.52, stdev=11954.66 00:09:06.296 clat percentiles (usec): 00:09:06.296 | 1.00th=[ 4686], 5.00th=[ 8094], 10.00th=[10028], 20.00th=[10945], 00:09:06.296 | 30.00th=[11600], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:09:06.296 | 70.00th=[13042], 80.00th=[22676], 90.00th=[24249], 95.00th=[45876], 00:09:06.296 | 99.00th=[63177], 99.50th=[63701], 99.90th=[63701], 99.95th=[63701], 00:09:06.296 | 99.99th=[63701] 00:09:06.296 bw ( 
KiB/s): min=14024, max=20904, per=25.04%, avg=17464.00, stdev=4864.89, samples=2 00:09:06.296 iops : min= 3506, max= 5226, avg=4366.00, stdev=1216.22, samples=2 00:09:06.296 lat (msec) : 4=0.15%, 10=7.37%, 20=77.96%, 50=11.96%, 100=2.56% 00:09:06.296 cpu : usr=5.11%, sys=8.74%, ctx=398, majf=0, minf=1 00:09:06.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:06.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.296 issued rwts: total=4096,4494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.296 job1: (groupid=0, jobs=1): err= 0: pid=159534: Mon Dec 9 03:59:34 2024 00:09:06.296 read: IOPS=3087, BW=12.1MiB/s (12.6MB/s)(12.2MiB/1015msec) 00:09:06.296 slat (usec): min=2, max=14100, avg=113.40, stdev=765.17 00:09:06.296 clat (usec): min=5987, max=33309, avg=14064.06, stdev=4265.05 00:09:06.296 lat (usec): min=6008, max=33316, avg=14177.46, stdev=4313.53 00:09:06.296 clat percentiles (usec): 00:09:06.296 | 1.00th=[ 7242], 5.00th=[ 9634], 10.00th=[11076], 20.00th=[11731], 00:09:06.296 | 30.00th=[12256], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:09:06.296 | 70.00th=[13960], 80.00th=[15664], 90.00th=[20317], 95.00th=[23462], 00:09:06.296 | 99.00th=[30540], 99.50th=[31851], 99.90th=[33424], 99.95th=[33424], 00:09:06.296 | 99.99th=[33424] 00:09:06.296 write: IOPS=3531, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1015msec); 0 zone resets 00:09:06.296 slat (usec): min=4, max=15877, avg=170.79, stdev=1016.68 00:09:06.296 clat (msec): min=3, max=117, avg=23.63, stdev=20.46 00:09:06.296 lat (msec): min=3, max=117, avg=23.80, stdev=20.60 00:09:06.296 clat percentiles (msec): 00:09:06.296 | 1.00th=[ 5], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:09:06.296 | 30.00th=[ 13], 40.00th=[ 14], 50.00th=[ 18], 60.00th=[ 19], 00:09:06.296 | 70.00th=[ 23], 80.00th=[ 25], 
90.00th=[ 54], 95.00th=[ 62], 00:09:06.296 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 118], 99.95th=[ 118], 00:09:06.296 | 99.99th=[ 118] 00:09:06.296 bw ( KiB/s): min=11136, max=17042, per=20.20%, avg=14089.00, stdev=4176.17, samples=2 00:09:06.296 iops : min= 2784, max= 4260, avg=3522.00, stdev=1043.69, samples=2 00:09:06.296 lat (msec) : 4=0.27%, 10=6.09%, 20=68.65%, 50=19.01%, 100=4.67% 00:09:06.297 lat (msec) : 250=1.31% 00:09:06.297 cpu : usr=3.06%, sys=7.69%, ctx=319, majf=0, minf=1 00:09:06.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.297 issued rwts: total=3134,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.297 job2: (groupid=0, jobs=1): err= 0: pid=159535: Mon Dec 9 03:59:34 2024 00:09:06.297 read: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec) 00:09:06.297 slat (usec): min=2, max=12809, avg=116.28, stdev=810.75 00:09:06.297 clat (usec): min=4578, max=27873, avg=14378.66, stdev=3621.19 00:09:06.297 lat (usec): min=4598, max=27888, avg=14494.94, stdev=3670.55 00:09:06.297 clat percentiles (usec): 00:09:06.297 | 1.00th=[ 5997], 5.00th=[10159], 10.00th=[11731], 20.00th=[12125], 00:09:06.297 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13173], 60.00th=[13960], 00:09:06.297 | 70.00th=[15008], 80.00th=[16909], 90.00th=[19530], 95.00th=[22152], 00:09:06.297 | 99.00th=[24773], 99.50th=[26346], 99.90th=[27919], 99.95th=[27919], 00:09:06.297 | 99.99th=[27919] 00:09:06.297 write: IOPS=5012, BW=19.6MiB/s (20.5MB/s)(19.8MiB/1013msec); 0 zone resets 00:09:06.297 slat (usec): min=4, max=10232, avg=81.65, stdev=395.21 00:09:06.297 clat (usec): min=1153, max=26481, avg=12241.08, stdev=2860.71 00:09:06.297 lat (usec): min=1431, max=26501, avg=12322.73, stdev=2896.40 00:09:06.297 clat 
percentiles (usec): 00:09:06.297 | 1.00th=[ 4015], 5.00th=[ 5735], 10.00th=[ 7963], 20.00th=[11338], 00:09:06.297 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13042], 60.00th=[13173], 00:09:06.297 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14484], 95.00th=[14615], 00:09:06.297 | 99.00th=[21365], 99.50th=[23462], 99.90th=[24511], 99.95th=[26346], 00:09:06.297 | 99.99th=[26608] 00:09:06.297 bw ( KiB/s): min=19136, max=20464, per=28.39%, avg=19800.00, stdev=939.04, samples=2 00:09:06.297 iops : min= 4784, max= 5116, avg=4950.00, stdev=234.76, samples=2 00:09:06.297 lat (msec) : 2=0.07%, 4=0.43%, 10=10.06%, 20=84.37%, 50=5.07% 00:09:06.297 cpu : usr=6.32%, sys=9.39%, ctx=568, majf=0, minf=2 00:09:06.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.297 issued rwts: total=4608,5078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.297 job3: (groupid=0, jobs=1): err= 0: pid=159536: Mon Dec 9 03:59:34 2024 00:09:06.297 read: IOPS=4323, BW=16.9MiB/s (17.7MB/s)(17.0MiB/1006msec) 00:09:06.297 slat (usec): min=2, max=13423, avg=107.07, stdev=696.19 00:09:06.297 clat (usec): min=1909, max=27081, avg=13793.14, stdev=2453.33 00:09:06.297 lat (usec): min=5545, max=27093, avg=13900.21, stdev=2488.90 00:09:06.297 clat percentiles (usec): 00:09:06.297 | 1.00th=[ 6849], 5.00th=[11076], 10.00th=[11469], 20.00th=[12387], 00:09:06.297 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13698], 60.00th=[13960], 00:09:06.297 | 70.00th=[14222], 80.00th=[14746], 90.00th=[16188], 95.00th=[18482], 00:09:06.297 | 99.00th=[23462], 99.50th=[25297], 99.90th=[26870], 99.95th=[26870], 00:09:06.297 | 99.99th=[27132] 00:09:06.297 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:06.297 slat (usec): min=3, 
max=29394, avg=107.39, stdev=842.95 00:09:06.297 clat (usec): min=1842, max=64910, avg=14577.40, stdev=6905.82 00:09:06.297 lat (usec): min=1871, max=64931, avg=14684.79, stdev=6961.98 00:09:06.297 clat percentiles (usec): 00:09:06.297 | 1.00th=[ 6063], 5.00th=[ 8586], 10.00th=[10159], 20.00th=[11469], 00:09:06.297 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13173], 60.00th=[13698], 00:09:06.297 | 70.00th=[14091], 80.00th=[14353], 90.00th=[18220], 95.00th=[35390], 00:09:06.297 | 99.00th=[47449], 99.50th=[47449], 99.90th=[47449], 99.95th=[52167], 00:09:06.297 | 99.99th=[64750] 00:09:06.297 bw ( KiB/s): min=16384, max=20480, per=26.43%, avg=18432.00, stdev=2896.31, samples=2 00:09:06.297 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:09:06.297 lat (msec) : 2=0.07%, 10=5.27%, 20=88.94%, 50=5.69%, 100=0.03% 00:09:06.297 cpu : usr=4.88%, sys=5.87%, ctx=388, majf=0, minf=1 00:09:06.297 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:06.297 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:06.297 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:06.297 issued rwts: total=4349,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:06.297 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:06.297 00:09:06.297 Run status group 0 (all jobs): 00:09:06.297 READ: bw=62.1MiB/s (65.1MB/s), 12.1MiB/s-17.8MiB/s (12.6MB/s-18.6MB/s), io=63.2MiB (66.3MB), run=1006-1019msec 00:09:06.297 WRITE: bw=68.1MiB/s (71.4MB/s), 13.8MiB/s-19.6MiB/s (14.5MB/s-20.5MB/s), io=69.4MiB (72.8MB), run=1006-1019msec 00:09:06.297 00:09:06.297 Disk stats (read/write): 00:09:06.297 nvme0n1: ios=3634/3935, merge=0/0, ticks=27722/36413, in_queue=64135, util=86.47% 00:09:06.297 nvme0n2: ios=2728/3072, merge=0/0, ticks=36643/67031, in_queue=103674, util=100.00% 00:09:06.297 nvme0n3: ios=3891/4096, merge=0/0, ticks=54028/48621, in_queue=102649, util=98.12% 00:09:06.297 nvme0n4: ios=3635/3775, 
merge=0/0, ticks=31004/31472, in_queue=62476, util=97.89% 00:09:06.297 03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:06.297 03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=159674 00:09:06.297 03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:06.297 03:59:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:06.297 [global] 00:09:06.297 thread=1 00:09:06.297 invalidate=1 00:09:06.297 rw=read 00:09:06.297 time_based=1 00:09:06.297 runtime=10 00:09:06.297 ioengine=libaio 00:09:06.297 direct=1 00:09:06.297 bs=4096 00:09:06.297 iodepth=1 00:09:06.297 norandommap=1 00:09:06.297 numjobs=1 00:09:06.297 00:09:06.297 [job0] 00:09:06.297 filename=/dev/nvme0n1 00:09:06.297 [job1] 00:09:06.297 filename=/dev/nvme0n2 00:09:06.297 [job2] 00:09:06.297 filename=/dev/nvme0n3 00:09:06.297 [job3] 00:09:06.297 filename=/dev/nvme0n4 00:09:06.297 Could not set queue depth (nvme0n1) 00:09:06.297 Could not set queue depth (nvme0n2) 00:09:06.297 Could not set queue depth (nvme0n3) 00:09:06.297 Could not set queue depth (nvme0n4) 00:09:06.297 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.297 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.297 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.297 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:06.297 fio-3.35 00:09:06.297 Starting 4 threads 00:09:09.577 03:59:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:09.577 03:59:37 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:09.577 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=303104, buflen=4096 00:09:09.577 fio: pid=159770, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:09.835 03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:09.835 03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:09.835 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4165632, buflen=4096 00:09:09.835 fio: pid=159769, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.095 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=29044736, buflen=4096 00:09:10.095 fio: pid=159767, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:10.095 03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.095 03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:10.354 03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.354 03:59:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:10.354 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=10534912, buflen=4096 00:09:10.354 fio: pid=159768, err=5/file:io_u.c:1889, func=io_u error, 
error=Input/output error 00:09:10.354 00:09:10.354 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=159767: Mon Dec 9 03:59:38 2024 00:09:10.354 read: IOPS=2022, BW=8090KiB/s (8284kB/s)(27.7MiB/3506msec) 00:09:10.354 slat (usec): min=4, max=10947, avg=15.06, stdev=210.20 00:09:10.354 clat (usec): min=171, max=41992, avg=473.32, stdev=3013.47 00:09:10.354 lat (usec): min=177, max=51812, avg=488.38, stdev=3041.51 00:09:10.354 clat percentiles (usec): 00:09:10.354 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:09:10.354 | 30.00th=[ 217], 40.00th=[ 227], 50.00th=[ 235], 60.00th=[ 243], 00:09:10.354 | 70.00th=[ 251], 80.00th=[ 265], 90.00th=[ 318], 95.00th=[ 396], 00:09:10.354 | 99.00th=[ 529], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:09:10.354 | 99.99th=[42206] 00:09:10.354 bw ( KiB/s): min= 160, max=15768, per=79.92%, avg=9009.33, stdev=7127.75, samples=6 00:09:10.354 iops : min= 40, max= 3942, avg=2252.33, stdev=1781.94, samples=6 00:09:10.354 lat (usec) : 250=68.92%, 500=29.10%, 750=1.35%, 1000=0.01% 00:09:10.354 lat (msec) : 2=0.04%, 50=0.55% 00:09:10.354 cpu : usr=1.54%, sys=3.00%, ctx=7100, majf=0, minf=1 00:09:10.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.354 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.354 issued rwts: total=7092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.354 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=159768: Mon Dec 9 03:59:38 2024 00:09:10.354 read: IOPS=674, BW=2696KiB/s (2761kB/s)(10.0MiB/3816msec) 00:09:10.354 slat (usec): min=3, max=8599, avg=15.98, stdev=263.38 00:09:10.354 clat (usec): min=157, max=42028, avg=1466.13, stdev=7030.01 00:09:10.354 lat (usec): 
min=162, max=42043, avg=1479.50, stdev=7035.30 00:09:10.354 clat percentiles (usec): 00:09:10.354 | 1.00th=[ 167], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 190], 00:09:10.354 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:09:10.354 | 70.00th=[ 221], 80.00th=[ 243], 90.00th=[ 269], 95.00th=[ 375], 00:09:10.354 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:10.354 | 99.99th=[42206] 00:09:10.354 bw ( KiB/s): min= 96, max=10824, per=24.15%, avg=2722.29, stdev=4569.83, samples=7 00:09:10.354 iops : min= 24, max= 2706, avg=680.57, stdev=1142.46, samples=7 00:09:10.354 lat (usec) : 250=83.56%, 500=12.98%, 750=0.31% 00:09:10.354 lat (msec) : 2=0.04%, 50=3.07% 00:09:10.354 cpu : usr=0.13%, sys=0.76%, ctx=2578, majf=0, minf=1 00:09:10.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.354 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.354 issued rwts: total=2573,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.354 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=159769: Mon Dec 9 03:59:38 2024 00:09:10.354 read: IOPS=312, BW=1250KiB/s (1280kB/s)(4068KiB/3254msec) 00:09:10.354 slat (usec): min=5, max=8918, avg=22.77, stdev=279.13 00:09:10.354 clat (usec): min=192, max=41993, avg=3163.96, stdev=10388.15 00:09:10.354 lat (usec): min=198, max=42028, avg=3186.73, stdev=10391.91 00:09:10.354 clat percentiles (usec): 00:09:10.354 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 231], 00:09:10.354 | 30.00th=[ 273], 40.00th=[ 297], 50.00th=[ 318], 60.00th=[ 330], 00:09:10.354 | 70.00th=[ 347], 80.00th=[ 453], 90.00th=[ 510], 95.00th=[41157], 00:09:10.354 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.354 | 
99.99th=[42206] 00:09:10.354 bw ( KiB/s): min= 96, max= 7544, per=11.93%, avg=1345.33, stdev=3036.72, samples=6 00:09:10.354 iops : min= 24, max= 1886, avg=336.33, stdev=759.18, samples=6 00:09:10.354 lat (usec) : 250=25.93%, 500=61.69%, 750=5.30% 00:09:10.354 lat (msec) : 50=6.97% 00:09:10.354 cpu : usr=0.15%, sys=0.52%, ctx=1020, majf=0, minf=1 00:09:10.354 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.354 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.354 issued rwts: total=1018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.354 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.354 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=159770: Mon Dec 9 03:59:38 2024 00:09:10.354 read: IOPS=25, BW=100KiB/s (103kB/s)(296KiB/2946msec) 00:09:10.354 slat (nsec): min=12911, max=45911, avg=22389.53, stdev=8803.28 00:09:10.354 clat (usec): min=311, max=42049, avg=39460.74, stdev=8101.56 00:09:10.354 lat (usec): min=324, max=42064, avg=39483.00, stdev=8101.40 00:09:10.354 clat percentiles (usec): 00:09:10.354 | 1.00th=[ 310], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:10.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:10.355 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:09:10.355 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:10.355 | 99.99th=[42206] 00:09:10.355 bw ( KiB/s): min= 96, max= 112, per=0.89%, avg=100.80, stdev= 7.16, samples=5 00:09:10.355 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:09:10.355 lat (usec) : 500=4.00% 00:09:10.355 lat (msec) : 50=94.67% 00:09:10.355 cpu : usr=0.00%, sys=0.07%, ctx=75, majf=0, minf=2 00:09:10.355 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:10.355 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.355 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.355 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.355 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:10.355 00:09:10.355 Run status group 0 (all jobs): 00:09:10.355 READ: bw=11.0MiB/s (11.5MB/s), 100KiB/s-8090KiB/s (103kB/s-8284kB/s), io=42.0MiB (44.0MB), run=2946-3816msec 00:09:10.355 00:09:10.355 Disk stats (read/write): 00:09:10.355 nvme0n1: ios=6891/0, merge=0/0, ticks=4357/0, in_queue=4357, util=98.66% 00:09:10.355 nvme0n2: ios=2566/0, merge=0/0, ticks=3516/0, in_queue=3516, util=96.06% 00:09:10.355 nvme0n3: ios=1067/0, merge=0/0, ticks=3483/0, in_queue=3483, util=99.13% 00:09:10.355 nvme0n4: ios=72/0, merge=0/0, ticks=2840/0, in_queue=2840, util=96.71% 00:09:10.613 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.613 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:10.872 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:10.872 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:11.131 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.131 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:11.389 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:11.389 03:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:11.646 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:11.646 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 159674 00:09:11.646 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:11.646 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:11.904 nvmf hotplug test: fio failed as expected 00:09:11.904 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:12.161 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:12.162 rmmod nvme_tcp 00:09:12.162 rmmod nvme_fabrics 00:09:12.162 rmmod nvme_keyring 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 157643 ']' 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 157643 00:09:12.162 
03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 157643 ']' 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 157643 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157643 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157643' 00:09:12.162 killing process with pid 157643 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 157643 00:09:12.162 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 157643 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 
00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:12.422 03:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:14.958 00:09:14.958 real 0m24.427s 00:09:14.958 user 1m25.560s 00:09:14.958 sys 0m6.609s 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:14.958 ************************************ 00:09:14.958 END TEST nvmf_fio_target 00:09:14.958 ************************************ 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.958 03:59:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.958 ************************************ 00:09:14.958 START TEST nvmf_bdevio 00:09:14.958 ************************************ 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:14.958 * Looking for test storage... 00:09:14.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.958 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:14.959 03:59:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:14.959 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:14.959 --rc genhtml_branch_coverage=1 00:09:14.959 --rc genhtml_function_coverage=1 00:09:14.959 --rc genhtml_legend=1 00:09:14.959 --rc geninfo_all_blocks=1 00:09:14.959 --rc geninfo_unexecuted_blocks=1 00:09:14.959 00:09:14.959 ' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.959 --rc genhtml_branch_coverage=1 00:09:14.959 --rc genhtml_function_coverage=1 00:09:14.959 --rc genhtml_legend=1 00:09:14.959 --rc geninfo_all_blocks=1 00:09:14.959 --rc geninfo_unexecuted_blocks=1 00:09:14.959 00:09:14.959 ' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.959 --rc genhtml_branch_coverage=1 00:09:14.959 --rc genhtml_function_coverage=1 00:09:14.959 --rc genhtml_legend=1 00:09:14.959 --rc geninfo_all_blocks=1 00:09:14.959 --rc geninfo_unexecuted_blocks=1 00:09:14.959 00:09:14.959 ' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:14.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.959 --rc genhtml_branch_coverage=1 00:09:14.959 --rc genhtml_function_coverage=1 00:09:14.959 --rc genhtml_legend=1 00:09:14.959 --rc geninfo_all_blocks=1 00:09:14.959 --rc geninfo_unexecuted_blocks=1 00:09:14.959 00:09:14.959 ' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.959 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.960 03:59:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:16.870 03:59:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:16.870 03:59:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:16.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:16.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:16.870 
03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.870 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:16.871 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:16.871 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:16.871 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:09:17.129 00:09:17.129 --- 10.0.0.2 ping statistics --- 00:09:17.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.129 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:09:17.129 00:09:17.129 --- 10.0.0.1 ping statistics --- 00:09:17.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.129 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.129 03:59:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=162511 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 162511 00:09:17.129 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 162511 ']' 00:09:17.130 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.130 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.130 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.130 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.130 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.130 [2024-12-09 03:59:45.575714] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:09:17.130 [2024-12-09 03:59:45.575822] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.130 [2024-12-09 03:59:45.652031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.388 [2024-12-09 03:59:45.716162] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.388 [2024-12-09 03:59:45.716213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.388 [2024-12-09 03:59:45.716242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.388 [2024-12-09 03:59:45.716254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.388 [2024-12-09 03:59:45.716264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:17.388 [2024-12-09 03:59:45.718017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.388 [2024-12-09 03:59:45.718081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.388 [2024-12-09 03:59:45.718133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:17.388 [2024-12-09 03:59:45.718136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.388 [2024-12-09 03:59:45.869412] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.388 03:59:45 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.388 Malloc0 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.388 [2024-12-09 03:59:45.929694] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:17.388 { 00:09:17.388 "params": { 00:09:17.388 "name": "Nvme$subsystem", 00:09:17.388 "trtype": "$TEST_TRANSPORT", 00:09:17.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:17.388 "adrfam": "ipv4", 00:09:17.388 "trsvcid": "$NVMF_PORT", 00:09:17.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:17.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:17.388 "hdgst": ${hdgst:-false}, 00:09:17.388 "ddgst": ${ddgst:-false} 00:09:17.388 }, 00:09:17.388 "method": "bdev_nvme_attach_controller" 00:09:17.388 } 00:09:17.388 EOF 00:09:17.388 )") 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:17.388 03:59:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:17.388 "params": { 00:09:17.388 "name": "Nvme1", 00:09:17.388 "trtype": "tcp", 00:09:17.388 "traddr": "10.0.0.2", 00:09:17.388 "adrfam": "ipv4", 00:09:17.388 "trsvcid": "4420", 00:09:17.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:17.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:17.388 "hdgst": false, 00:09:17.388 "ddgst": false 00:09:17.388 }, 00:09:17.388 "method": "bdev_nvme_attach_controller" 00:09:17.388 }' 00:09:17.646 [2024-12-09 03:59:45.980028] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:09:17.646 [2024-12-09 03:59:45.980093] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162553 ] 00:09:17.646 [2024-12-09 03:59:46.048946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.646 [2024-12-09 03:59:46.113307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.646 [2024-12-09 03:59:46.113362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.646 [2024-12-09 03:59:46.113366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.904 I/O targets: 00:09:17.904 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:17.904 00:09:17.904 00:09:17.904 CUnit - A unit testing framework for C - Version 2.1-3 00:09:17.904 http://cunit.sourceforge.net/ 00:09:17.904 00:09:17.904 00:09:17.904 Suite: bdevio tests on: Nvme1n1 00:09:17.904 Test: blockdev write read block ...passed 00:09:18.162 Test: blockdev write zeroes read block ...passed 00:09:18.162 Test: blockdev write zeroes read no split ...passed 00:09:18.162 Test: blockdev write zeroes read split 
...passed 00:09:18.163 Test: blockdev write zeroes read split partial ...passed 00:09:18.163 Test: blockdev reset ...[2024-12-09 03:59:46.533347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:18.163 [2024-12-09 03:59:46.533466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5f58c0 (9): Bad file descriptor 00:09:18.163 [2024-12-09 03:59:46.551533] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:18.163 passed 00:09:18.163 Test: blockdev write read 8 blocks ...passed 00:09:18.163 Test: blockdev write read size > 128k ...passed 00:09:18.163 Test: blockdev write read invalid size ...passed 00:09:18.163 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:18.163 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:18.163 Test: blockdev write read max offset ...passed 00:09:18.163 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:18.163 Test: blockdev writev readv 8 blocks ...passed 00:09:18.163 Test: blockdev writev readv 30 x 1block ...passed 00:09:18.421 Test: blockdev writev readv block ...passed 00:09:18.421 Test: blockdev writev readv size > 128k ...passed 00:09:18.421 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:18.421 Test: blockdev comparev and writev ...[2024-12-09 03:59:46.763890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.421 [2024-12-09 03:59:46.763925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:18.421 [2024-12-09 03:59:46.763949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 
03:59:46.763966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.764289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 03:59:46.764314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.764335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 03:59:46.764351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.764647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 03:59:46.764671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.764707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 03:59:46.764724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.765031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 03:59:46.765054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.765075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.422 [2024-12-09 03:59:46.765091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:18.422 passed 00:09:18.422 Test: blockdev nvme passthru rw ...passed 00:09:18.422 Test: blockdev nvme passthru vendor specific ...[2024-12-09 03:59:46.847513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.422 [2024-12-09 03:59:46.847539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.847676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.422 [2024-12-09 03:59:46.847699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.847828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.422 [2024-12-09 03:59:46.847850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:18.422 [2024-12-09 03:59:46.847989] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.422 [2024-12-09 03:59:46.848011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:18.422 passed 00:09:18.422 Test: blockdev nvme admin passthru ...passed 00:09:18.422 Test: blockdev copy ...passed 00:09:18.422 00:09:18.422 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.422 suites 1 1 n/a 0 0 00:09:18.422 tests 23 23 23 0 0 00:09:18.422 asserts 152 152 152 0 n/a 00:09:18.422 00:09:18.422 Elapsed time = 0.964 seconds 
00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.681 rmmod nvme_tcp 00:09:18.681 rmmod nvme_fabrics 00:09:18.681 rmmod nvme_keyring 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 162511 ']' 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 162511 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 162511 ']' 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 162511 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 162511 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 162511' 00:09:18.681 killing process with pid 162511 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 162511 00:09:18.681 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 162511 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:18.941 03:59:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.485 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.485 00:09:21.485 real 0m6.471s 00:09:21.485 user 0m9.965s 00:09:21.485 sys 0m2.180s 00:09:21.485 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.485 03:59:49 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:21.485 ************************************ 00:09:21.485 END TEST nvmf_bdevio 00:09:21.485 ************************************ 00:09:21.485 03:59:49 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:21.485 00:09:21.485 real 3m57.375s 00:09:21.486 user 10m21.336s 00:09:21.486 sys 1m6.085s 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.486 ************************************ 00:09:21.486 END TEST nvmf_target_core 00:09:21.486 ************************************ 00:09:21.486 03:59:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:21.486 03:59:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.486 03:59:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.486 03:59:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:09:21.486 ************************************ 00:09:21.486 START TEST nvmf_target_extra 00:09:21.486 ************************************ 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:21.486 * Looking for test storage... 00:09:21.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.486 --rc genhtml_branch_coverage=1 00:09:21.486 --rc genhtml_function_coverage=1 00:09:21.486 --rc genhtml_legend=1 00:09:21.486 --rc geninfo_all_blocks=1 
00:09:21.486 --rc geninfo_unexecuted_blocks=1 00:09:21.486 00:09:21.486 ' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.486 --rc genhtml_branch_coverage=1 00:09:21.486 --rc genhtml_function_coverage=1 00:09:21.486 --rc genhtml_legend=1 00:09:21.486 --rc geninfo_all_blocks=1 00:09:21.486 --rc geninfo_unexecuted_blocks=1 00:09:21.486 00:09:21.486 ' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.486 --rc genhtml_branch_coverage=1 00:09:21.486 --rc genhtml_function_coverage=1 00:09:21.486 --rc genhtml_legend=1 00:09:21.486 --rc geninfo_all_blocks=1 00:09:21.486 --rc geninfo_unexecuted_blocks=1 00:09:21.486 00:09:21.486 ' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.486 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.486 --rc genhtml_branch_coverage=1 00:09:21.486 --rc genhtml_function_coverage=1 00:09:21.486 --rc genhtml_legend=1 00:09:21.486 --rc geninfo_all_blocks=1 00:09:21.486 --rc geninfo_unexecuted_blocks=1 00:09:21.486 00:09:21.486 ' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:21.486 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:21.487 ************************************ 00:09:21.487 START TEST nvmf_example 00:09:21.487 ************************************ 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:21.487 * Looking for test storage... 00:09:21.487 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.487 
03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:21.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.487 --rc genhtml_branch_coverage=1 00:09:21.487 --rc genhtml_function_coverage=1 00:09:21.487 --rc genhtml_legend=1 00:09:21.487 --rc geninfo_all_blocks=1 00:09:21.487 --rc geninfo_unexecuted_blocks=1 00:09:21.487 00:09:21.487 ' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:21.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.487 --rc genhtml_branch_coverage=1 00:09:21.487 --rc genhtml_function_coverage=1 00:09:21.487 --rc genhtml_legend=1 00:09:21.487 --rc geninfo_all_blocks=1 00:09:21.487 --rc geninfo_unexecuted_blocks=1 00:09:21.487 00:09:21.487 ' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:21.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.487 --rc genhtml_branch_coverage=1 00:09:21.487 --rc genhtml_function_coverage=1 00:09:21.487 --rc genhtml_legend=1 00:09:21.487 --rc geninfo_all_blocks=1 00:09:21.487 --rc geninfo_unexecuted_blocks=1 00:09:21.487 00:09:21.487 ' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:21.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.487 --rc 
genhtml_branch_coverage=1 00:09:21.487 --rc genhtml_function_coverage=1 00:09:21.487 --rc genhtml_legend=1 00:09:21.487 --rc geninfo_all_blocks=1 00:09:21.487 --rc geninfo_unexecuted_blocks=1 00:09:21.487 00:09:21.487 ' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.487 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:21.488 03:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.488 
03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.488 03:59:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.027 03:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.027 03:59:52 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.027 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.027 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.028 
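The discovery pass above builds per-family device arrays (e810, x722, mlx) keyed by PCI vendor:device pairs, then looks for net interfaces under /sys/bus/pci/devices/$pci/net/. The classification step can be sketched as below; the device-ID table is copied from the nvmf/common.sh excerpts in this log, while the function name `classify_nic` and its output format are illustrative, not part of the harness.

```shell
#!/bin/sh
# Map a PCI vendor:device pair to the NIC family the harness tracks.
# IDs are the ones enumerated in the log (0x8086 = Intel, 0x15b3 = Mellanox).
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
        *)                           echo unknown ;;
    esac
}

# The two ports found in this run, 0000:0a:00.0/1, are 0x8086:0x159b:
classify_nic 0x8086 0x159b
```

This matches why the log takes the `[[ e810 == e810 ]]` branch and restricts `pci_devs` to the two E810 ports.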
03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms 00:09:24.028 00:09:24.028 --- 10.0.0.2 ping statistics --- 00:09:24.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.028 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:09:24.028 00:09:24.028 --- 10.0.0.1 ping statistics --- 00:09:24.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.028 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.028 03:59:52 
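The `nvmf_tcp_init` sequence logged above moves one port into a network namespace, addresses both ends, opens the NVMe/TCP port, and verifies connectivity with ping. For readers reproducing the environment by hand, here is a sketch of those steps; interface names, addresses, and the namespace name are taken from the log, and the function only prints the commands so it can be inspected without root.

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init steps from the log. Prints each command
# instead of executing it; pipe the output to "sh" as root on a matching box.
nvmf_tcp_init_sketch() {
    tgt_if=cvl_0_0 ini_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    echo "ip netns add $ns"
    echo "ip link set $tgt_if netns $ns"                          # target port lives in the namespace
    echo "ip addr add 10.0.0.1/24 dev $ini_if"                    # initiator side
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if"  # target side
    echo "ip link set $ini_if up"
    echo "ip netns exec $ns ip link set $tgt_if up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
    echo "ping -c 1 10.0.0.2"                                     # initiator -> target
    echo "ip netns exec $ns ping -c 1 10.0.0.1"                   # target -> initiator
}
nvmf_tcp_init_sketch
```

Putting the target port in its own namespace is what lets a single host act as both NVMe-oF target and initiator over real hardware, which is why every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.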
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=164698 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 164698 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 164698 ']' 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:24.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:24.028 03:59:52 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:24.960 03:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:24.960 03:59:53 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:37.157 Initializing NVMe Controllers 00:09:37.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:37.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:37.158 Initialization complete. Launching workers. 00:09:37.158 ======================================================== 00:09:37.158 Latency(us) 00:09:37.158 Device Information : IOPS MiB/s Average min max 00:09:37.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14832.88 57.94 4314.57 896.47 15943.16 00:09:37.158 ======================================================== 00:09:37.158 Total : 14832.88 57.94 4314.57 896.47 15943.16 00:09:37.158 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.158 rmmod nvme_tcp 00:09:37.158 rmmod nvme_fabrics 00:09:37.158 rmmod nvme_keyring 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 164698 ']' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 164698 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 164698 ']' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 164698 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 164698 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 164698' 00:09:37.158 killing process with pid 164698 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 164698 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 164698 00:09:37.158 nvmf threads initialize successfully 00:09:37.158 bdev subsystem init successfully 00:09:37.158 created a nvmf target service 00:09:37.158 create targets's poll groups done 00:09:37.158 all subsystems of target started 00:09:37.158 nvmf target is running 00:09:37.158 all subsystems of target stopped 00:09:37.158 destroy targets's poll groups done 00:09:37.158 destroyed the nvmf target service 00:09:37.158 bdev subsystem finish 
successfully 00:09:37.158 nvmf threads destroy successfully 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.158 04:00:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.726 00:09:37.726 real 0m16.344s 00:09:37.726 user 0m46.061s 00:09:37.726 sys 0m3.362s 00:09:37.726 04:00:06 
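Teardown ends with `iptr`, i.e. `iptables-save | grep -v SPDK_NVMF | iptables-restore`: every firewall rule SPDK added earlier carried an `SPDK_NVMF:` comment, so restoring the saved ruleset minus those lines removes exactly SPDK's rules. The filtering step can be demonstrated on sample ruleset text (the sample lines below are illustrative, not from this log):

```shell
#!/bin/sh
# The SPDK_NVMF comment tag makes SPDK's rules easy to strip from a saved
# ruleset; only untagged rules survive the filter.
filter_spdk_rules() {
    grep -v SPDK_NVMF
}

sample='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:tag"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

printf '%s\n' "$sample" | filter_spdk_rules
```

Tagging rules at insertion time and filtering by tag at cleanup avoids having to remember rule positions, which may have shifted while the test ran.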
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:37.726 ************************************ 00:09:37.726 END TEST nvmf_example 00:09:37.726 ************************************ 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:37.726 ************************************ 00:09:37.726 START TEST nvmf_filesystem 00:09:37.726 ************************************ 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:37.726 * Looking for test storage... 
00:09:37.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:37.726 
04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.726 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:37.726 --rc genhtml_branch_coverage=1 00:09:37.726 --rc genhtml_function_coverage=1 00:09:37.726 --rc genhtml_legend=1 00:09:37.726 --rc geninfo_all_blocks=1 00:09:37.726 --rc geninfo_unexecuted_blocks=1 00:09:37.726 00:09:37.726 ' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.726 --rc genhtml_branch_coverage=1 00:09:37.726 --rc genhtml_function_coverage=1 00:09:37.726 --rc genhtml_legend=1 00:09:37.726 --rc geninfo_all_blocks=1 00:09:37.726 --rc geninfo_unexecuted_blocks=1 00:09:37.726 00:09:37.726 ' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.726 --rc genhtml_branch_coverage=1 00:09:37.726 --rc genhtml_function_coverage=1 00:09:37.726 --rc genhtml_legend=1 00:09:37.726 --rc geninfo_all_blocks=1 00:09:37.726 --rc geninfo_unexecuted_blocks=1 00:09:37.726 00:09:37.726 ' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.726 --rc genhtml_branch_coverage=1 00:09:37.726 --rc genhtml_function_coverage=1 00:09:37.726 --rc genhtml_legend=1 00:09:37.726 --rc geninfo_all_blocks=1 00:09:37.726 --rc geninfo_unexecuted_blocks=1 00:09:37.726 00:09:37.726 ' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:37.726 04:00:06 
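The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on the separators `.`, `-`, and `:` (the `IFS=.-:` lines) and compares the fields numerically left to right. A portable sketch of that comparison is below; the real scripts/common.sh uses bash arrays and a `case "$op"` dispatch, so this is an approximation of the less-than path only, with missing fields treated as 0.

```shell
#!/bin/sh
# Sketch of the cmp_versions "<" check seen in the log: split on [.:-],
# compare numerically field by field. Returns 0 (true) iff $1 < $2.
version_lt() {
    awk -v a="$1" -v b="$2" 'BEGIN {
        na = split(a, A, "[.:-]")
        nb = split(b, B, "[.:-]")
        n = (na > nb) ? na : nb
        for (i = 1; i <= n; i++) {
            x = (i <= na) ? A[i] + 0 : 0   # missing fields count as 0
            y = (i <= nb) ? B[i] + 0 : 0
            if (x < y) exit 0              # a < b
            if (x > y) exit 1              # a > b
        }
        exit 1                             # equal, hence not less-than
    }'
}

version_lt 1.15 2 && echo "1.15 < 2, so the lcov fallback options are used"
```

Numeric field-wise comparison is what makes `1.15 < 2` come out true even though the string `"1.15"` sorts after `"2"` lexically.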
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:37.726 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:37.726 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:37.727 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:37.727 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:37.727 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:37.727 
04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:37.727 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:37.727 #define SPDK_CONFIG_H 00:09:37.727 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:37.727 #define SPDK_CONFIG_APPS 1 00:09:37.727 #define SPDK_CONFIG_ARCH native 00:09:37.727 #undef SPDK_CONFIG_ASAN 00:09:37.727 #undef SPDK_CONFIG_AVAHI 00:09:37.727 #undef SPDK_CONFIG_CET 00:09:37.727 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:37.727 #define SPDK_CONFIG_COVERAGE 1 00:09:37.727 #define SPDK_CONFIG_CROSS_PREFIX 00:09:37.727 #undef SPDK_CONFIG_CRYPTO 00:09:37.727 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:37.727 #undef SPDK_CONFIG_CUSTOMOCF 00:09:37.727 #undef SPDK_CONFIG_DAOS 00:09:37.727 #define SPDK_CONFIG_DAOS_DIR 00:09:37.727 #define SPDK_CONFIG_DEBUG 1 00:09:37.727 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:37.727 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:37.727 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:37.727 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:37.727 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:37.727 #undef SPDK_CONFIG_DPDK_UADK 00:09:37.727 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:37.727 #define SPDK_CONFIG_EXAMPLES 1 00:09:37.727 #undef SPDK_CONFIG_FC 00:09:37.727 #define SPDK_CONFIG_FC_PATH 00:09:37.727 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:37.727 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:37.727 #define SPDK_CONFIG_FSDEV 1 00:09:37.727 #undef SPDK_CONFIG_FUSE 00:09:37.727 #undef SPDK_CONFIG_FUZZER 00:09:37.727 #define SPDK_CONFIG_FUZZER_LIB 00:09:37.727 #undef SPDK_CONFIG_GOLANG 00:09:37.727 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:37.727 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:37.727 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:37.727 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:37.727 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:37.727 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:37.727 #undef SPDK_CONFIG_HAVE_LZ4 00:09:37.727 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:37.728 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:37.728 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:37.728 #define SPDK_CONFIG_IDXD 1 00:09:37.728 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:37.728 #undef SPDK_CONFIG_IPSEC_MB 00:09:37.728 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:37.728 #define SPDK_CONFIG_ISAL 1 00:09:37.728 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:37.728 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:37.728 #define SPDK_CONFIG_LIBDIR 00:09:37.728 #undef SPDK_CONFIG_LTO 00:09:37.728 #define SPDK_CONFIG_MAX_LCORES 128 00:09:37.728 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:37.728 #define SPDK_CONFIG_NVME_CUSE 1 00:09:37.728 #undef SPDK_CONFIG_OCF 00:09:37.728 #define SPDK_CONFIG_OCF_PATH 00:09:37.728 #define SPDK_CONFIG_OPENSSL_PATH 00:09:37.728 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:37.728 #define SPDK_CONFIG_PGO_DIR 00:09:37.728 #undef SPDK_CONFIG_PGO_USE 00:09:37.728 #define SPDK_CONFIG_PREFIX /usr/local 00:09:37.728 #undef SPDK_CONFIG_RAID5F 00:09:37.728 #undef SPDK_CONFIG_RBD 00:09:37.728 #define SPDK_CONFIG_RDMA 1 00:09:37.728 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:37.728 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:37.728 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:37.728 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:37.728 #define SPDK_CONFIG_SHARED 1 00:09:37.728 #undef SPDK_CONFIG_SMA 00:09:37.728 #define SPDK_CONFIG_TESTS 1 00:09:37.728 #undef SPDK_CONFIG_TSAN 00:09:37.728 #define SPDK_CONFIG_UBLK 1 00:09:37.728 #define SPDK_CONFIG_UBSAN 1 00:09:37.728 #undef SPDK_CONFIG_UNIT_TESTS 00:09:37.728 #undef SPDK_CONFIG_URING 00:09:37.728 #define SPDK_CONFIG_URING_PATH 00:09:37.728 #undef SPDK_CONFIG_URING_ZNS 00:09:37.728 #undef SPDK_CONFIG_USDT 00:09:37.728 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:37.728 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:37.728 #define SPDK_CONFIG_VFIO_USER 1 00:09:37.728 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:37.728 #define SPDK_CONFIG_VHOST 1 00:09:37.728 #define SPDK_CONFIG_VIRTIO 1 00:09:37.728 #undef SPDK_CONFIG_VTUNE 00:09:37.728 #define SPDK_CONFIG_VTUNE_DIR 00:09:37.728 #define SPDK_CONFIG_WERROR 1 00:09:37.728 #define SPDK_CONFIG_WPDK_DIR 00:09:37.728 #undef SPDK_CONFIG_XNVME 00:09:37.728 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:37.728 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:37.728 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:37.729 
04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:37.729 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:37.729 
04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:37.729 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:37.729 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:37.991 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 166632 ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 166632 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ikf8J8 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ikf8J8/tests/target /tmp/spdk.ikf8J8 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=56177328128 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5811200000 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:37.992 
04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984232960 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375277568 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22429696 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993952768 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064 00:09:37.992 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:37.992 * Looking for test storage... 
00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=56177328128 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8025792512 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.992 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:37.992 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:37.992 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.993 --rc genhtml_branch_coverage=1 00:09:37.993 --rc genhtml_function_coverage=1 00:09:37.993 --rc genhtml_legend=1 00:09:37.993 --rc geninfo_all_blocks=1 00:09:37.993 --rc geninfo_unexecuted_blocks=1 00:09:37.993 00:09:37.993 ' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.993 --rc genhtml_branch_coverage=1 00:09:37.993 --rc genhtml_function_coverage=1 00:09:37.993 --rc genhtml_legend=1 00:09:37.993 --rc geninfo_all_blocks=1 00:09:37.993 --rc geninfo_unexecuted_blocks=1 00:09:37.993 00:09:37.993 ' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.993 --rc genhtml_branch_coverage=1 00:09:37.993 --rc genhtml_function_coverage=1 00:09:37.993 --rc genhtml_legend=1 00:09:37.993 --rc geninfo_all_blocks=1 00:09:37.993 --rc geninfo_unexecuted_blocks=1 00:09:37.993 00:09:37.993 ' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:37.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:37.993 --rc genhtml_branch_coverage=1 00:09:37.993 --rc genhtml_function_coverage=1 00:09:37.993 --rc genhtml_legend=1 00:09:37.993 --rc geninfo_all_blocks=1 00:09:37.993 --rc geninfo_unexecuted_blocks=1 00:09:37.993 00:09:37.993 ' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:37.993 04:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
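The paths/export.sh trace above prepends the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories each time it is sourced, so the exported PATH accumulates many duplicate entries. Duplicates are harmless to lookup but bloat every exec'd environment; a sketch of a dedup helper (hypothetical, not part of SPDK) that keeps only the first occurrence of each entry:

```shell
#!/usr/bin/env bash
# Print a PATH-like string with duplicate entries removed,
# preserving first-seen order.
dedupe_path() {
    local entry out= seen=:
    local IFS=:
    for entry in $1; do
        # Skip entries already emitted (entries are tracked as ":entry:").
        [[ $seen == *":$entry:"* ]] && continue
        seen+="$entry:"
        out+="${out:+:}$entry"
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/bin:/usr/bin:/opt/go/bin:/bin"   # /opt/go/bin:/usr/bin:/bin
```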
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:37.993 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
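The error logged above, `[: : integer expression expected`, comes from `'[' '' -eq 1 ']'`: an unset or empty variable reaches a numeric `-eq` test, and the empty string is not an integer. A sketch of the usual guard, defaulting the value before comparing (`flag_enabled` is a hypothetical stand-in, not the variable nvmf/common.sh actually tests):

```shell
#!/usr/bin/env bash
# Report whether a flag is enabled without tripping the integer-test
# error when the flag is empty or unset.
flag_enabled() {
    # ${1:-0} substitutes 0 when $1 is unset OR empty, so the -eq
    # test always sees an integer.
    local flag=${1:-0}
    [ "$flag" -eq 1 ] && echo enabled || echo disabled
}

flag_enabled ""   # disabled  (plain [ "" -eq 1 ] would print the error above)
flag_enabled 1    # enabled
```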
MALLOC_BDEV_SIZE=512 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:09:37.993 04:00:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:40.527 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:40.528 04:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:40.528 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:40.528 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.528 04:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:40.528 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:40.528 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:40.528 04:00:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:40.528 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:40.528 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:09:40.528 00:09:40.528 --- 10.0.0.2 ping statistics --- 00:09:40.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.528 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:40.528 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:40.528 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:09:40.528 00:09:40.528 --- 10.0.0.1 ping statistics --- 00:09:40.528 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:40.528 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:40.528 04:00:08 
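The `nvmf_tcp_init` sequence traced above splits the two physical ports into a test topology: the target interface is moved into a network namespace and the pair is addressed as initiator 10.0.0.1 / target 10.0.0.2, with an iptables rule admitting the NVMe/TCP port, then verified with the pings shown. A dry-run sketch that only prints the equivalent commands (interface and namespace names are the `cvl_0_*` ones from this log; running them for real requires root):

```shell
#!/usr/bin/env bash
# Emit (without executing) the netns setup commands mirrored from the
# nvmf/common.sh trace above.
netns_setup_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}

netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Keeping the target in its own namespace lets a single host exercise a real TCP path between initiator and target without a second machine.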
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:40.528 ************************************ 00:09:40.528 START TEST nvmf_filesystem_no_in_capsule 00:09:40.528 ************************************ 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:40.528 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=168739 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 168739 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 168739 ']' 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:40.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:40.529 04:00:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.529 [2024-12-09 04:00:08.858945] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:09:40.529 [2024-12-09 04:00:08.859015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:40.529 [2024-12-09 04:00:08.936186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:40.529 [2024-12-09 04:00:08.996183] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:40.529 [2024-12-09 04:00:08.996236] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:40.529 [2024-12-09 04:00:08.996264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:40.529 [2024-12-09 04:00:08.996283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:40.529 [2024-12-09 04:00:08.996294] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:40.529 [2024-12-09 04:00:08.997809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.529 [2024-12-09 04:00:08.997837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:40.529 [2024-12-09 04:00:08.997896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:40.529 [2024-12-09 04:00:08.997899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 [2024-12-09 04:00:09.139997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.786 Malloc1 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.786 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.787 [2024-12-09 04:00:09.332858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:40.787 04:00:09 
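The `rpc_cmd` calls traced above (target/filesystem.sh lines 52-56) bring the target up in four steps: create the TCP transport, create a malloc bdev, create the subsystem, then attach the namespace and listener. A dry-run sketch printing the equivalent invocations; the `rpc.py` spelling is an assumption about how `rpc_cmd` is ultimately dispatched, but the method names and arguments are taken verbatim from the log:

```shell
#!/usr/bin/env bash
# Emit (without executing) the RPC sequence that configures the NVMe-oF
# target, mirroring the rpc_cmd trace above.
rpc_setup_cmds() {
    local nqn=$1 bdev=$2 ip=$3 port=$4
    cat <<EOF
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 512 512 -b $bdev
rpc.py nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns $nqn $bdev
rpc.py nvmf_subsystem_add_listener $nqn -t tcp -a $ip -s $port
EOF
}

rpc_setup_cmds nqn.2016-06.io.spdk:cnode1 Malloc1 10.0.0.2 4420
```

The ordering matters: the transport must exist before a listener can be added, and the bdev before it can be attached as a namespace.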
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:40.787 { 00:09:40.787 "name": "Malloc1", 00:09:40.787 "aliases": [ 00:09:40.787 "690ef8a0-6331-4ce3-9424-a00e4af070cc" 00:09:40.787 ], 00:09:40.787 "product_name": "Malloc disk", 00:09:40.787 "block_size": 512, 00:09:40.787 "num_blocks": 1048576, 00:09:40.787 "uuid": "690ef8a0-6331-4ce3-9424-a00e4af070cc", 00:09:40.787 "assigned_rate_limits": { 00:09:40.787 "rw_ios_per_sec": 0, 00:09:40.787 "rw_mbytes_per_sec": 0, 00:09:40.787 "r_mbytes_per_sec": 0, 00:09:40.787 "w_mbytes_per_sec": 0 00:09:40.787 }, 00:09:40.787 "claimed": true, 00:09:40.787 "claim_type": "exclusive_write", 00:09:40.787 "zoned": false, 00:09:40.787 "supported_io_types": { 00:09:40.787 "read": true, 00:09:40.787 "write": true, 00:09:40.787 "unmap": true, 00:09:40.787 "flush": true, 00:09:40.787 "reset": true, 00:09:40.787 "nvme_admin": false, 00:09:40.787 "nvme_io": false, 00:09:40.787 "nvme_io_md": false, 00:09:40.787 "write_zeroes": true, 00:09:40.787 "zcopy": true, 00:09:40.787 "get_zone_info": false, 00:09:40.787 "zone_management": false, 00:09:40.787 "zone_append": false, 00:09:40.787 "compare": false, 00:09:40.787 "compare_and_write": 
false, 00:09:40.787 "abort": true, 00:09:40.787 "seek_hole": false, 00:09:40.787 "seek_data": false, 00:09:40.787 "copy": true, 00:09:40.787 "nvme_iov_md": false 00:09:40.787 }, 00:09:40.787 "memory_domains": [ 00:09:40.787 { 00:09:40.787 "dma_device_id": "system", 00:09:40.787 "dma_device_type": 1 00:09:40.787 }, 00:09:40.787 { 00:09:40.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.787 "dma_device_type": 2 00:09:40.787 } 00:09:40.787 ], 00:09:40.787 "driver_specific": {} 00:09:40.787 } 00:09:40.787 ]' 00:09:40.787 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:41.044 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:41.044 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:41.044 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:41.044 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:41.045 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:41.045 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:41.045 04:00:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.610 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
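`get_bdev_size` extracts `block_size` and `num_blocks` from the `bdev_get_bdevs` JSON with jq and reports the size in MiB, which the harness then scales back to bytes for `malloc_size`. A sketch of that arithmetic, with the values copied from the bdev info above (the MiB conversion is inferred from the logged `bdev_size=512` / `malloc_size=536870912` pair):

```python
def get_bdev_size_mib(bdev_info: dict) -> int:
    # Mirrors the jq pipeline in the log: block_size * num_blocks, in MiB.
    return bdev_info["block_size"] * bdev_info["num_blocks"] // (1024 * 1024)

# Values copied from the bdev_get_bdevs output for Malloc1 above.
malloc1 = {"block_size": 512, "num_blocks": 1048576}
size_mib = get_bdev_size_mib(malloc1)    # 512
malloc_size = size_mib * 1024 * 1024     # 536870912 bytes, as logged
```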
# waitforserial SPDKISFASTANDAWESOME 00:09:41.610 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:41.610 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.610 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:41.610 04:00:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:44.137 04:00:12 
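`waitforserial` sleeps, then polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected device count appears, bounded by the `(( i++ <= 15 ))` guard visible above. A generic sketch of that polling pattern, with the probe injected so it can run without an NVMe device attached:

```python
import time

def wait_for_devices(count_devices, expected=1, retries=16, delay=0.0):
    """Poll count_devices() until it reports `expected` devices.

    Returns True on success, False after `retries` attempts. In the log the
    probe is `lsblk | grep -c SERIAL` and the delay is 2 seconds.
    """
    for _ in range(retries):
        if count_devices() == expected:
            return True
        time.sleep(delay)
    return False

# Stand-in probe: the device shows up on the second poll, as in the log.
polls = iter([0, 1])
```

The real helper exits non-zero on timeout so the surrounding test fails fast instead of hanging.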
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:44.137 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:44.395 04:00:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:45.783 04:00:13 
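`sec_size_to_bytes` resolves the block device's capacity (the log echoes 536870912 for `nvme0n1`) so the harness can assert `nvme_size == malloc_size`. Its implementation is not shown in the log; the sketch below assumes the standard Linux convention that `/sys/block/<dev>/size` reports 512-byte sectors regardless of the device's LBA size:

```python
def sec_size_to_bytes(sectors: int) -> int:
    # /sys/block/<dev>/size counts 512-byte sectors, independent of LBA size.
    return sectors * 512

# The 512 MiB malloc bdev exported over NVMe/TCP is 1048576 such sectors.
nvme_size = sec_size_to_bytes(1048576)  # 536870912, matching malloc_size above
```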
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:45.783 ************************************ 00:09:45.783 START TEST filesystem_ext4 00:09:45.783 ************************************ 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:45.783 04:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:45.783 04:00:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:45.783 mke2fs 1.47.0 (5-Feb-2023) 00:09:45.783 Discarding device blocks: 0/522240 done 00:09:45.783 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:45.783 Filesystem UUID: 0a73c6fd-23c5-491d-946d-7999aaf48e7f 00:09:45.783 Superblock backups stored on blocks: 00:09:45.783 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:45.783 00:09:45.783 Allocating group tables: 0/64 done 00:09:45.783 Writing inode tables: 0/64 done 00:09:46.346 Creating journal (8192 blocks): done 00:09:47.276 Writing superblocks and filesystem accounting information: 0/64 done 00:09:47.276 00:09:47.276 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:47.276 04:00:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:53.830 04:00:21 
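The `mkfs.ext4` output above reports 522240 blocks of 1 KiB: exactly 510 MiB, i.e. the 512 MiB namespace minus GPT partitioning overhead (and the same 510.00 MiB that `mkfs.btrfs` reports for the partition later in the log). Checking that arithmetic:

```python
block_size = 1024   # mkfs.ext4 used 1 KiB blocks, per the log
blocks = 522240     # block count reported by mkfs.ext4

fs_bytes = block_size * blocks
# 510 MiB: the GPT partition carved from the 512 MiB namespace.
assert fs_bytes == 510 * 1024 * 1024
```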
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 168739 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:53.830 00:09:53.830 real 0m7.952s 00:09:53.830 user 0m0.015s 00:09:53.830 sys 0m0.093s 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:53.830 ************************************ 00:09:53.830 END TEST filesystem_ext4 00:09:53.830 ************************************ 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:53.830 
04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.830 ************************************ 00:09:53.830 START TEST filesystem_btrfs 00:09:53.830 ************************************ 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:53.830 04:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:53.830 04:00:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:53.830 btrfs-progs v6.8.1 00:09:53.830 See https://btrfs.readthedocs.io for more information. 00:09:53.830 00:09:53.830 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:53.830 NOTE: several default settings have changed in version 5.15, please make sure 00:09:53.830 this does not affect your deployments: 00:09:53.830 - DUP for metadata (-m dup) 00:09:53.830 - enabled no-holes (-O no-holes) 00:09:53.830 - enabled free-space-tree (-R free-space-tree) 00:09:53.830 00:09:53.830 Label: (null) 00:09:53.830 UUID: 9c23c8bc-bb9f-4c59-916f-60423f704582 00:09:53.830 Node size: 16384 00:09:53.830 Sector size: 4096 (CPU page size: 4096) 00:09:53.830 Filesystem size: 510.00MiB 00:09:53.830 Block group profiles: 00:09:53.830 Data: single 8.00MiB 00:09:53.830 Metadata: DUP 32.00MiB 00:09:53.830 System: DUP 8.00MiB 00:09:53.830 SSD detected: yes 00:09:53.830 Zoned device: no 00:09:53.830 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:53.830 Checksum: crc32c 00:09:53.830 Number of devices: 1 00:09:53.830 Devices: 00:09:53.830 ID SIZE PATH 00:09:53.830 1 510.00MiB /dev/nvme0n1p1 00:09:53.830 00:09:53.830 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:53.830 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:54.766 04:00:22 
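`make_filesystem` picks the mkfs force flag per filesystem type, visible in the xtrace branches above: `'[' ext4 = ext4 ']'` sets `force=-F`, while the btrfs (and later xfs) run falls through to `force=-f`. A sketch of that dispatch; the helper's `local i=0` suggests it also retries mkfs on failure, which is not reproduced here:

```python
def mkfs_force_flag(fstype: str) -> str:
    """Return the force flag make_filesystem passes to mkfs.<fstype>."""
    # ext4's mke2fs spells "force" as -F; btrfs and xfs use -f.
    return "-F" if fstype == "ext4" else "-f"

fstype = "btrfs"
cmd = f"mkfs.{fstype} {mkfs_force_flag(fstype)} /dev/nvme0n1p1"
# "mkfs.btrfs -f /dev/nvme0n1p1", the command executed in the log
```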
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:54.766 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:54.766 04:00:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 168739 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:54.766 00:09:54.766 real 0m1.095s 00:09:54.766 user 0m0.014s 00:09:54.766 sys 0m0.129s 00:09:54.766 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.766 
04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:54.766 ************************************ 00:09:54.766 END TEST filesystem_btrfs 00:09:54.767 ************************************ 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:54.767 ************************************ 00:09:54.767 START TEST filesystem_xfs 00:09:54.767 ************************************ 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:54.767 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:54.767 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:54.767 = sectsz=512 attr=2, projid32bit=1 00:09:54.767 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:54.767 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:54.767 data = bsize=4096 blocks=130560, imaxpct=25 00:09:54.767 = sunit=0 swidth=0 blks 00:09:54.767 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:54.767 log =internal log bsize=4096 blocks=16384, version=2 00:09:54.767 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:54.767 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:55.701 Discarding blocks...Done. 
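The `mkfs.xfs` geometry above is internally consistent with the earlier filesystems: 4 allocation groups of 32640 blocks tile the 130560-block data section exactly, and at a 4096-byte block size that is the same 510 MiB partition ext4 and btrfs formatted. Verifying both identities:

```python
agcount, agsize = 4, 32640     # allocation groups reported by mkfs.xfs
bsize, dblocks = 4096, 130560  # data block size and block count from the log

assert agcount * agsize == dblocks           # AGs cover the data section exactly
assert bsize * dblocks == 510 * 1024 * 1024  # same 510 MiB partition as ext4/btrfs
```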
00:09:55.701 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:55.701 04:00:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 168739 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:59.004 04:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:59.004 00:09:59.004 real 0m3.860s 00:09:59.004 user 0m0.018s 00:09:59.004 sys 0m0.089s 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:59.004 ************************************ 00:09:59.004 END TEST filesystem_xfs 00:09:59.004 ************************************ 00:09:59.004 04:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:59.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 168739 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 168739 ']' 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 168739 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 168739 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 168739' 00:09:59.004 killing process with pid 168739 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 168739 00:09:59.004 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 168739 00:09:59.262 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:59.262 00:09:59.262 real 0m18.978s 00:09:59.262 user 1m13.589s 00:09:59.262 sys 0m2.297s 00:09:59.262 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.262 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.262 ************************************ 00:09:59.262 END TEST nvmf_filesystem_no_in_capsule 00:09:59.262 ************************************ 00:09:59.262 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:59.262 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:59.262 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.262 04:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.553 ************************************ 00:09:59.553 START TEST nvmf_filesystem_in_capsule 00:09:59.553 ************************************ 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=171269 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 171269 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 171269 ']' 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.553 04:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.553 04:00:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.553 [2024-12-09 04:00:27.896692] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:09:59.553 [2024-12-09 04:00:27.896805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.553 [2024-12-09 04:00:27.972164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.553 [2024-12-09 04:00:28.032506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.553 [2024-12-09 04:00:28.032567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.553 [2024-12-09 04:00:28.032596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.553 [2024-12-09 04:00:28.032607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.553 [2024-12-09 04:00:28.032617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:59.553 [2024-12-09 04:00:28.034067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.553 [2024-12-09 04:00:28.034127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.553 [2024-12-09 04:00:28.034150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.553 [2024-12-09 04:00:28.034157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 [2024-12-09 04:00:28.179847] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 04:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 [2024-12-09 04:00:28.359846] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.811 04:00:28 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.811 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:59.811 { 00:09:59.811 "name": "Malloc1", 00:09:59.811 "aliases": [ 00:09:59.811 "69911007-2c1a-4d99-9fe8-055371d313d7" 00:09:59.811 ], 00:09:59.811 "product_name": "Malloc disk", 00:09:59.811 "block_size": 512, 00:09:59.811 "num_blocks": 1048576, 00:09:59.811 "uuid": "69911007-2c1a-4d99-9fe8-055371d313d7", 00:09:59.811 "assigned_rate_limits": { 00:09:59.811 "rw_ios_per_sec": 0, 00:09:59.811 "rw_mbytes_per_sec": 0, 00:09:59.811 "r_mbytes_per_sec": 0, 00:09:59.811 "w_mbytes_per_sec": 0 00:09:59.811 }, 00:09:59.811 "claimed": true, 00:09:59.811 "claim_type": "exclusive_write", 00:09:59.811 "zoned": false, 00:09:59.811 "supported_io_types": { 00:09:59.811 "read": true, 00:09:59.811 "write": true, 00:09:59.811 "unmap": true, 00:09:59.811 "flush": true, 00:09:59.811 "reset": true, 00:09:59.811 "nvme_admin": false, 00:09:59.811 "nvme_io": false, 00:09:59.811 "nvme_io_md": false, 00:09:59.811 "write_zeroes": true, 00:09:59.811 "zcopy": true, 00:09:59.811 "get_zone_info": false, 00:09:59.811 "zone_management": false, 00:09:59.811 "zone_append": false, 00:09:59.811 "compare": false, 00:09:59.811 "compare_and_write": false, 00:09:59.811 "abort": true, 00:09:59.811 "seek_hole": false, 00:09:59.811 "seek_data": false, 00:09:59.811 "copy": true, 00:09:59.811 "nvme_iov_md": false 00:09:59.811 }, 00:09:59.811 "memory_domains": [ 00:09:59.811 { 00:09:59.812 "dma_device_id": "system", 00:09:59.812 "dma_device_type": 1 00:09:59.812 }, 00:09:59.812 { 00:09:59.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.812 "dma_device_type": 2 00:09:59.812 } 00:09:59.812 ], 00:09:59.812 
"driver_specific": {} 00:09:59.812 } 00:09:59.812 ]' 00:09:59.812 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:00.069 04:00:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.635 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.635 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:00.636 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.636 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:00.636 04:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:02.532 04:00:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:02.532 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:02.790 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:02.790 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:02.790 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:03.355 04:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.288 ************************************ 00:10:04.288 START TEST filesystem_in_capsule_ext4 00:10:04.288 ************************************ 00:10:04.288 04:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:04.288 04:00:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:04.288 mke2fs 1.47.0 (5-Feb-2023) 00:10:04.546 Discarding device blocks: 
0/522240 done 00:10:04.546 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:04.546 Filesystem UUID: 987be2e3-9211-403e-b0ee-e3ae0e56528a 00:10:04.546 Superblock backups stored on blocks: 00:10:04.546 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:04.546 00:10:04.546 Allocating group tables: 0/64 done 00:10:04.546 Writing inode tables: 0/64 done 00:10:04.804 Creating journal (8192 blocks): done 00:10:05.887 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:10:05.887 00:10:05.888 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:05.888 04:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.442 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.442 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:12.442 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.442 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:12.442 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 171269 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.443 00:10:12.443 real 0m7.077s 00:10:12.443 user 0m0.019s 00:10:12.443 sys 0m0.067s 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 ************************************ 00:10:12.443 END TEST filesystem_in_capsule_ext4 00:10:12.443 ************************************ 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 ************************************ 00:10:12.443 START 
TEST filesystem_in_capsule_btrfs 00:10:12.443 ************************************ 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:12.443 04:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:12.443 btrfs-progs v6.8.1 00:10:12.443 See https://btrfs.readthedocs.io for more information. 00:10:12.443 00:10:12.443 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:12.443 NOTE: several default settings have changed in version 5.15, please make sure 00:10:12.443 this does not affect your deployments: 00:10:12.443 - DUP for metadata (-m dup) 00:10:12.443 - enabled no-holes (-O no-holes) 00:10:12.443 - enabled free-space-tree (-R free-space-tree) 00:10:12.443 00:10:12.443 Label: (null) 00:10:12.443 UUID: 72304c39-1933-4684-95bf-64fb13a0d1c3 00:10:12.443 Node size: 16384 00:10:12.443 Sector size: 4096 (CPU page size: 4096) 00:10:12.443 Filesystem size: 510.00MiB 00:10:12.443 Block group profiles: 00:10:12.443 Data: single 8.00MiB 00:10:12.443 Metadata: DUP 32.00MiB 00:10:12.443 System: DUP 8.00MiB 00:10:12.443 SSD detected: yes 00:10:12.443 Zoned device: no 00:10:12.443 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:12.443 Checksum: crc32c 00:10:12.443 Number of devices: 1 00:10:12.443 Devices: 00:10:12.443 ID SIZE PATH 00:10:12.443 1 510.00MiB /dev/nvme0n1p1 00:10:12.443 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 171269 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:12.443 00:10:12.443 real 0m0.528s 00:10:12.443 user 0m0.015s 00:10:12.443 sys 0m0.100s 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 ************************************ 00:10:12.443 END TEST filesystem_in_capsule_btrfs 00:10:12.443 ************************************ 00:10:12.443 04:00:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.443 ************************************ 00:10:12.443 START TEST filesystem_in_capsule_xfs 00:10:12.443 ************************************ 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:12.443 
04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:12.443 04:00:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:12.443 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:12.443 = sectsz=512 attr=2, projid32bit=1 00:10:12.443 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:12.443 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:12.443 data = bsize=4096 blocks=130560, imaxpct=25 00:10:12.443 = sunit=0 swidth=0 blks 00:10:12.443 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:12.443 log =internal log bsize=4096 blocks=16384, version=2 00:10:12.443 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:12.443 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:13.378 Discarding blocks...Done. 
00:10:13.378 04:00:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:13.378 04:00:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 171269 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:15.910 00:10:15.910 real 0m3.562s 00:10:15.910 user 0m0.023s 00:10:15.910 sys 0m0.051s 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:15.910 ************************************ 00:10:15.910 END TEST filesystem_in_capsule_xfs 00:10:15.910 ************************************ 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:15.910 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:16.168 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.168 04:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 171269 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 171269 ']' 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 171269 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.168 04:00:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171269 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171269' 00:10:16.168 killing process with pid 171269 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 171269 00:10:16.168 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 171269 00:10:16.428 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:16.428 00:10:16.428 real 0m17.158s 00:10:16.428 user 1m6.453s 00:10:16.428 sys 0m2.127s 00:10:16.428 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.428 04:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:16.428 ************************************ 00:10:16.428 END TEST nvmf_filesystem_in_capsule 00:10:16.428 ************************************ 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:16.705 rmmod nvme_tcp 00:10:16.705 rmmod nvme_fabrics 00:10:16.705 rmmod nvme_keyring 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.705 04:00:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:18.612 00:10:18.612 real 0m41.014s 00:10:18.612 user 2m21.124s 00:10:18.612 sys 0m6.218s 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:18.612 ************************************ 00:10:18.612 END TEST nvmf_filesystem 00:10:18.612 ************************************ 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:18.612 ************************************ 00:10:18.612 START TEST nvmf_target_discovery 00:10:18.612 ************************************ 00:10:18.612 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:18.874 * Looking for test storage... 
00:10:18.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:10:18.874 
04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.874 --rc genhtml_branch_coverage=1 00:10:18.874 --rc genhtml_function_coverage=1 00:10:18.874 --rc genhtml_legend=1 00:10:18.874 --rc geninfo_all_blocks=1 00:10:18.874 --rc geninfo_unexecuted_blocks=1 00:10:18.874 00:10:18.874 ' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.874 --rc genhtml_branch_coverage=1 00:10:18.874 --rc genhtml_function_coverage=1 00:10:18.874 --rc genhtml_legend=1 00:10:18.874 --rc geninfo_all_blocks=1 00:10:18.874 --rc geninfo_unexecuted_blocks=1 00:10:18.874 00:10:18.874 ' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.874 --rc genhtml_branch_coverage=1 00:10:18.874 --rc genhtml_function_coverage=1 00:10:18.874 --rc genhtml_legend=1 00:10:18.874 --rc geninfo_all_blocks=1 00:10:18.874 --rc geninfo_unexecuted_blocks=1 00:10:18.874 00:10:18.874 ' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:18.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.874 --rc genhtml_branch_coverage=1 00:10:18.874 --rc genhtml_function_coverage=1 00:10:18.874 --rc genhtml_legend=1 00:10:18.874 --rc geninfo_all_blocks=1 00:10:18.874 --rc geninfo_unexecuted_blocks=1 00:10:18.874 00:10:18.874 ' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.874 04:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.874 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:18.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:10:18.875 04:00:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.415 04:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.415 04:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:21.415 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:21.415 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:21.415 04:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:21.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:21.415 04:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:21.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:10:21.415 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:21.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:10:21.416 00:10:21.416 --- 10.0.0.2 ping statistics --- 00:10:21.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.416 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:10:21.416 00:10:21.416 --- 10.0.0.1 ping statistics --- 00:10:21.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.416 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=175433 00:10:21.416 04:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 175433 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 175433 ']' 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:21.416 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.416 [2024-12-09 04:00:49.746362] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:10:21.416 [2024-12-09 04:00:49.746443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.416 [2024-12-09 04:00:49.814685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.416 [2024-12-09 04:00:49.869200] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:21.416 [2024-12-09 04:00:49.869258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.416 [2024-12-09 04:00:49.869293] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.416 [2024-12-09 04:00:49.869305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.416 [2024-12-09 04:00:49.869314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.416 [2024-12-09 04:00:49.870897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.416 [2024-12-09 04:00:49.871004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.416 [2024-12-09 04:00:49.871083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.416 [2024-12-09 04:00:49.871086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.674 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:21.674 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:21.674 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:21.674 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:21.674 04:00:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 [2024-12-09 04:00:50.020838] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 Null1 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 
04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 [2024-12-09 04:00:50.076477] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 Null2 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 
04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 Null3 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 Null4 00:10:21.674 
04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.674 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.675 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:21.675 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.675 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.675 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.675 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:21.932 00:10:21.932 Discovery Log Number of Records 6, Generation counter 6 00:10:21.932 =====Discovery Log Entry 0====== 00:10:21.932 trtype: tcp 00:10:21.932 adrfam: ipv4 00:10:21.932 subtype: current discovery subsystem 00:10:21.932 treq: not required 00:10:21.932 portid: 0 00:10:21.932 trsvcid: 4420 00:10:21.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:21.932 traddr: 10.0.0.2 00:10:21.932 eflags: explicit discovery connections, duplicate discovery information 00:10:21.932 sectype: none 00:10:21.932 =====Discovery Log Entry 1====== 00:10:21.932 trtype: tcp 00:10:21.932 adrfam: ipv4 00:10:21.932 subtype: nvme subsystem 00:10:21.932 treq: not required 00:10:21.932 portid: 0 00:10:21.932 trsvcid: 4420 00:10:21.932 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:21.932 traddr: 10.0.0.2 00:10:21.932 eflags: none 00:10:21.932 sectype: none 00:10:21.932 =====Discovery Log Entry 2====== 00:10:21.932 
trtype: tcp 00:10:21.932 adrfam: ipv4 00:10:21.932 subtype: nvme subsystem 00:10:21.932 treq: not required 00:10:21.932 portid: 0 00:10:21.932 trsvcid: 4420 00:10:21.932 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:21.932 traddr: 10.0.0.2 00:10:21.932 eflags: none 00:10:21.932 sectype: none 00:10:21.932 =====Discovery Log Entry 3====== 00:10:21.932 trtype: tcp 00:10:21.932 adrfam: ipv4 00:10:21.932 subtype: nvme subsystem 00:10:21.932 treq: not required 00:10:21.932 portid: 0 00:10:21.932 trsvcid: 4420 00:10:21.932 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:21.932 traddr: 10.0.0.2 00:10:21.932 eflags: none 00:10:21.932 sectype: none 00:10:21.932 =====Discovery Log Entry 4====== 00:10:21.932 trtype: tcp 00:10:21.932 adrfam: ipv4 00:10:21.932 subtype: nvme subsystem 00:10:21.932 treq: not required 00:10:21.932 portid: 0 00:10:21.932 trsvcid: 4420 00:10:21.932 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:21.932 traddr: 10.0.0.2 00:10:21.932 eflags: none 00:10:21.932 sectype: none 00:10:21.932 =====Discovery Log Entry 5====== 00:10:21.932 trtype: tcp 00:10:21.932 adrfam: ipv4 00:10:21.932 subtype: discovery subsystem referral 00:10:21.932 treq: not required 00:10:21.932 portid: 0 00:10:21.932 trsvcid: 4430 00:10:21.932 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:21.932 traddr: 10.0.0.2 00:10:21.932 eflags: none 00:10:21.932 sectype: none 00:10:21.932 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:21.932 Perform nvmf subsystem discovery via RPC 00:10:21.932 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:21.932 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.932 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.932 [ 00:10:21.932 { 00:10:21.932 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:10:21.932 "subtype": "Discovery", 00:10:21.932 "listen_addresses": [ 00:10:21.932 { 00:10:21.932 "trtype": "TCP", 00:10:21.932 "adrfam": "IPv4", 00:10:21.932 "traddr": "10.0.0.2", 00:10:21.932 "trsvcid": "4420" 00:10:21.932 } 00:10:21.932 ], 00:10:21.932 "allow_any_host": true, 00:10:21.932 "hosts": [] 00:10:21.932 }, 00:10:21.932 { 00:10:21.932 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.932 "subtype": "NVMe", 00:10:21.932 "listen_addresses": [ 00:10:21.932 { 00:10:21.932 "trtype": "TCP", 00:10:21.932 "adrfam": "IPv4", 00:10:21.932 "traddr": "10.0.0.2", 00:10:21.932 "trsvcid": "4420" 00:10:21.932 } 00:10:21.932 ], 00:10:21.932 "allow_any_host": true, 00:10:21.932 "hosts": [], 00:10:21.932 "serial_number": "SPDK00000000000001", 00:10:21.932 "model_number": "SPDK bdev Controller", 00:10:21.932 "max_namespaces": 32, 00:10:21.932 "min_cntlid": 1, 00:10:21.932 "max_cntlid": 65519, 00:10:21.932 "namespaces": [ 00:10:21.932 { 00:10:21.932 "nsid": 1, 00:10:21.932 "bdev_name": "Null1", 00:10:21.932 "name": "Null1", 00:10:21.932 "nguid": "59B1B72CF6084BECB8FE1895461A5878", 00:10:21.932 "uuid": "59b1b72c-f608-4bec-b8fe-1895461a5878" 00:10:21.932 } 00:10:21.932 ] 00:10:21.932 }, 00:10:21.932 { 00:10:21.932 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:21.932 "subtype": "NVMe", 00:10:21.932 "listen_addresses": [ 00:10:21.932 { 00:10:21.932 "trtype": "TCP", 00:10:21.932 "adrfam": "IPv4", 00:10:21.932 "traddr": "10.0.0.2", 00:10:21.932 "trsvcid": "4420" 00:10:21.932 } 00:10:21.932 ], 00:10:21.932 "allow_any_host": true, 00:10:21.932 "hosts": [], 00:10:21.932 "serial_number": "SPDK00000000000002", 00:10:21.932 "model_number": "SPDK bdev Controller", 00:10:21.932 "max_namespaces": 32, 00:10:21.932 "min_cntlid": 1, 00:10:21.932 "max_cntlid": 65519, 00:10:21.932 "namespaces": [ 00:10:21.932 { 00:10:21.932 "nsid": 1, 00:10:21.932 "bdev_name": "Null2", 00:10:21.932 "name": "Null2", 00:10:21.932 "nguid": "13B3F901C7B94C9B9B999792654FD38A", 
00:10:21.932 "uuid": "13b3f901-c7b9-4c9b-9b99-9792654fd38a" 00:10:21.932 } 00:10:21.932 ] 00:10:21.932 }, 00:10:21.932 { 00:10:21.932 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:21.932 "subtype": "NVMe", 00:10:21.932 "listen_addresses": [ 00:10:21.932 { 00:10:21.932 "trtype": "TCP", 00:10:21.932 "adrfam": "IPv4", 00:10:21.932 "traddr": "10.0.0.2", 00:10:21.932 "trsvcid": "4420" 00:10:21.932 } 00:10:21.932 ], 00:10:21.932 "allow_any_host": true, 00:10:21.932 "hosts": [], 00:10:21.932 "serial_number": "SPDK00000000000003", 00:10:21.932 "model_number": "SPDK bdev Controller", 00:10:21.932 "max_namespaces": 32, 00:10:21.932 "min_cntlid": 1, 00:10:21.932 "max_cntlid": 65519, 00:10:21.932 "namespaces": [ 00:10:21.932 { 00:10:21.932 "nsid": 1, 00:10:21.932 "bdev_name": "Null3", 00:10:21.932 "name": "Null3", 00:10:21.932 "nguid": "0FC8C464AF99483A93989EDAAE782308", 00:10:21.932 "uuid": "0fc8c464-af99-483a-9398-9edaae782308" 00:10:21.932 } 00:10:21.932 ] 00:10:21.932 }, 00:10:21.932 { 00:10:21.932 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:21.932 "subtype": "NVMe", 00:10:21.932 "listen_addresses": [ 00:10:21.932 { 00:10:21.932 "trtype": "TCP", 00:10:21.933 "adrfam": "IPv4", 00:10:21.933 "traddr": "10.0.0.2", 00:10:21.933 "trsvcid": "4420" 00:10:21.933 } 00:10:21.933 ], 00:10:21.933 "allow_any_host": true, 00:10:21.933 "hosts": [], 00:10:21.933 "serial_number": "SPDK00000000000004", 00:10:21.933 "model_number": "SPDK bdev Controller", 00:10:21.933 "max_namespaces": 32, 00:10:21.933 "min_cntlid": 1, 00:10:21.933 "max_cntlid": 65519, 00:10:21.933 "namespaces": [ 00:10:21.933 { 00:10:21.933 "nsid": 1, 00:10:21.933 "bdev_name": "Null4", 00:10:21.933 "name": "Null4", 00:10:21.933 "nguid": "D0113796ACA144E180DC7291A3991365", 00:10:21.933 "uuid": "d0113796-aca1-44e1-80dc-7291a3991365" 00:10:21.933 } 00:10:21.933 ] 00:10:21.933 } 00:10:21.933 ] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 
04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:21.933 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:22.190 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:22.190 rmmod nvme_tcp 00:10:22.190 rmmod nvme_fabrics 00:10:22.191 rmmod nvme_keyring 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 175433 ']' 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 175433 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 175433 ']' 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 175433 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:22.191 
04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 175433 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 175433' 00:10:22.191 killing process with pid 175433 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 175433 00:10:22.191 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 175433 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.451 04:00:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.358 00:10:24.358 real 0m5.705s 00:10:24.358 user 0m4.742s 00:10:24.358 sys 0m2.021s 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.358 ************************************ 00:10:24.358 END TEST nvmf_target_discovery 00:10:24.358 ************************************ 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:24.358 04:00:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.616 ************************************ 00:10:24.616 START TEST nvmf_referrals 00:10:24.616 ************************************ 00:10:24.616 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:24.616 * Looking for test storage... 
00:10:24.616 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.616 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:24.616 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:10:24.616 04:00:52 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:24.616 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:24.617 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:24.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.617 
--rc genhtml_branch_coverage=1 00:10:24.617 --rc genhtml_function_coverage=1 00:10:24.617 --rc genhtml_legend=1 00:10:24.617 --rc geninfo_all_blocks=1 00:10:24.617 --rc geninfo_unexecuted_blocks=1 00:10:24.617 00:10:24.617 ' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:24.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.617 --rc genhtml_branch_coverage=1 00:10:24.617 --rc genhtml_function_coverage=1 00:10:24.617 --rc genhtml_legend=1 00:10:24.617 --rc geninfo_all_blocks=1 00:10:24.617 --rc geninfo_unexecuted_blocks=1 00:10:24.617 00:10:24.617 ' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:24.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.617 --rc genhtml_branch_coverage=1 00:10:24.617 --rc genhtml_function_coverage=1 00:10:24.617 --rc genhtml_legend=1 00:10:24.617 --rc geninfo_all_blocks=1 00:10:24.617 --rc geninfo_unexecuted_blocks=1 00:10:24.617 00:10:24.617 ' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:24.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.617 --rc genhtml_branch_coverage=1 00:10:24.617 --rc genhtml_function_coverage=1 00:10:24.617 --rc genhtml_legend=1 00:10:24.617 --rc geninfo_all_blocks=1 00:10:24.617 --rc geninfo_unexecuted_blocks=1 00:10:24.617 00:10:24.617 ' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.617 
04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.617 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:24.617 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.618 04:00:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.618 04:00:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:27.155 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:27.155 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:27.155 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:27.155 04:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:27.155 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:27.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:27.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:10:27.155 00:10:27.155 --- 10.0.0.2 ping statistics --- 00:10:27.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.155 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:27.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:27.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:10:27.155 00:10:27.155 --- 10.0.0.1 ping statistics --- 00:10:27.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:27.155 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:10:27.155 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=177538 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 177538 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 177538 ']' 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:27.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 [2024-12-09 04:00:55.379626] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:10:27.156 [2024-12-09 04:00:55.379732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:27.156 [2024-12-09 04:00:55.457000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:27.156 [2024-12-09 04:00:55.517221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:27.156 [2024-12-09 04:00:55.517308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:27.156 [2024-12-09 04:00:55.517324] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:27.156 [2024-12-09 04:00:55.517335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:27.156 [2024-12-09 04:00:55.517345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:27.156 [2024-12-09 04:00:55.519036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:27.156 [2024-12-09 04:00:55.519115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:27.156 [2024-12-09 04:00:55.519093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:27.156 [2024-12-09 04:00:55.519118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 [2024-12-09 04:00:55.673879] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 [2024-12-09 04:00:55.698476] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:27.156 04:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.156 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:27.414 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.415 04:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.415 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:27.672 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:27.672 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:27.672 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:27.672 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:27.672 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:27.672 04:00:55 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:27.672 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:27.930 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:28.187 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:28.187 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:28.187 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:28.187 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:28.187 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:28.187 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:28.445 04:00:56 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:28.703 04:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:28.703 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.960 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:28.961 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.219 rmmod nvme_tcp 00:10:29.219 rmmod nvme_fabrics 00:10:29.219 rmmod nvme_keyring 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 177538 ']' 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 177538 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 177538 ']' 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 177538 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 177538 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 177538' 00:10:29.219 killing process with pid 177538 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 177538 00:10:29.219 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 177538 00:10:29.479 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.479 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.479 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.479 04:00:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.479 04:00:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.015 00:10:32.015 real 0m7.112s 00:10:32.015 user 0m11.409s 00:10:32.015 sys 0m2.264s 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:32.015 ************************************ 
00:10:32.015 END TEST nvmf_referrals 00:10:32.015 ************************************ 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.015 ************************************ 00:10:32.015 START TEST nvmf_connect_disconnect 00:10:32.015 ************************************ 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:32.015 * Looking for test storage... 
00:10:32.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.015 --rc genhtml_branch_coverage=1 00:10:32.015 --rc genhtml_function_coverage=1 00:10:32.015 --rc genhtml_legend=1 00:10:32.015 --rc geninfo_all_blocks=1 00:10:32.015 --rc geninfo_unexecuted_blocks=1 00:10:32.015 00:10:32.015 ' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.015 --rc genhtml_branch_coverage=1 00:10:32.015 --rc genhtml_function_coverage=1 00:10:32.015 --rc genhtml_legend=1 00:10:32.015 --rc geninfo_all_blocks=1 00:10:32.015 --rc geninfo_unexecuted_blocks=1 00:10:32.015 00:10:32.015 ' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.015 --rc genhtml_branch_coverage=1 00:10:32.015 --rc genhtml_function_coverage=1 00:10:32.015 --rc genhtml_legend=1 00:10:32.015 --rc geninfo_all_blocks=1 00:10:32.015 --rc geninfo_unexecuted_blocks=1 00:10:32.015 00:10:32.015 ' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.015 --rc genhtml_branch_coverage=1 00:10:32.015 --rc genhtml_function_coverage=1 00:10:32.015 --rc genhtml_legend=1 00:10:32.015 --rc geninfo_all_blocks=1 00:10:32.015 --rc geninfo_unexecuted_blocks=1 00:10:32.015 00:10:32.015 ' 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.015 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.016 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.016 04:01:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.919 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.919 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:33.919 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:33.919 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.919 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:33.919 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.919 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:33.919 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.919 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.178 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.178 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.178 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:34.178 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.178 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:34.178 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:34.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:10:34.179 00:10:34.179 --- 10.0.0.2 ping statistics --- 00:10:34.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.179 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:10:34.179 00:10:34.179 --- 10.0.0.1 ping statistics --- 00:10:34.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.179 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=179855 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 179855 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 179855 ']' 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:34.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:34.179 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.179 [2024-12-09 04:01:02.656760] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:10:34.179 [2024-12-09 04:01:02.656844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:34.179 [2024-12-09 04:01:02.726976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:34.438 [2024-12-09 04:01:02.783739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:10:34.438 [2024-12-09 04:01:02.783802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:34.438 [2024-12-09 04:01:02.783816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:34.438 [2024-12-09 04:01:02.783841] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:34.438 [2024-12-09 04:01:02.783850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:34.438 [2024-12-09 04:01:02.785230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:34.438 [2024-12-09 04:01:02.785346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:34.438 [2024-12-09 04:01:02.785373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:34.438 [2024-12-09 04:01:02.785376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:34.438 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 [2024-12-09 04:01:02.935335] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.438 04:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.438 04:01:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:34.438 [2024-12-09 04:01:02.997841] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:34.438 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.438 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:34.438 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:34.438 04:01:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:37.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.482 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:48.482 04:01:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.482 rmmod nvme_tcp 00:10:48.482 rmmod nvme_fabrics 00:10:48.482 rmmod nvme_keyring 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 179855 ']' 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 179855 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 179855 ']' 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 179855 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 179855 00:10:48.482 
04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 179855' 00:10:48.482 killing process with pid 179855 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 179855 00:10:48.482 04:01:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 179855 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.754 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.755 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.755 04:01:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.775 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.775 00:10:50.775 real 0m19.059s 00:10:50.775 user 0m56.929s 00:10:50.775 sys 0m3.449s 00:10:50.775 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.775 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:50.775 ************************************ 00:10:50.775 END TEST nvmf_connect_disconnect 00:10:50.775 ************************************ 00:10:50.775 04:01:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:50.775 04:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:50.775 04:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.776 04:01:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:50.776 ************************************ 00:10:50.776 START TEST nvmf_multitarget 00:10:50.776 ************************************ 00:10:50.776 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:50.776 * Looking for test storage... 
00:10:50.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.776 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:50.776 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:10:50.776 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:51.100 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.100 --rc genhtml_branch_coverage=1 00:10:51.100 --rc genhtml_function_coverage=1 00:10:51.100 --rc genhtml_legend=1 00:10:51.100 --rc geninfo_all_blocks=1 00:10:51.100 --rc geninfo_unexecuted_blocks=1 00:10:51.100 00:10:51.100 ' 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:51.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.100 --rc genhtml_branch_coverage=1 00:10:51.100 --rc genhtml_function_coverage=1 00:10:51.100 --rc genhtml_legend=1 00:10:51.100 --rc geninfo_all_blocks=1 00:10:51.100 --rc geninfo_unexecuted_blocks=1 00:10:51.100 00:10:51.100 ' 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:51.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.100 --rc genhtml_branch_coverage=1 00:10:51.100 --rc genhtml_function_coverage=1 00:10:51.100 --rc genhtml_legend=1 00:10:51.100 --rc geninfo_all_blocks=1 00:10:51.100 --rc geninfo_unexecuted_blocks=1 00:10:51.100 00:10:51.100 ' 00:10:51.100 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:51.100 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:51.100 --rc genhtml_branch_coverage=1 00:10:51.101 --rc genhtml_function_coverage=1 00:10:51.101 --rc genhtml_legend=1 00:10:51.101 --rc geninfo_all_blocks=1 00:10:51.101 --rc geninfo_unexecuted_blocks=1 00:10:51.101 00:10:51.101 ' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.101 04:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:51.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.101 04:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:10:51.101 04:01:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:10:53.230 04:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:53.230 04:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:53.230 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:53.230 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.230 04:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.230 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:53.231 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.231 
04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:53.231 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.231 04:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:53.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:10:53.231 00:10:53.231 --- 10.0.0.2 ping statistics --- 00:10:53.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.231 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:10:53.231 00:10:53.231 --- 10.0.0.1 ping statistics --- 00:10:53.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.231 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=183669 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 183669 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 183669 ']' 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:53.231 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:53.231 [2024-12-09 04:01:21.726665] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:10:53.231 [2024-12-09 04:01:21.726762] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.231 [2024-12-09 04:01:21.799923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.510 [2024-12-09 04:01:21.861287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.510 [2024-12-09 04:01:21.861361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:53.510 [2024-12-09 04:01:21.861389] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.510 [2024-12-09 04:01:21.861401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.510 [2024-12-09 04:01:21.861411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.510 [2024-12-09 04:01:21.863020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.510 [2024-12-09 04:01:21.863085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.510 [2024-12-09 04:01:21.863150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.510 [2024-12-09 04:01:21.863154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.510 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:53.510 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:10:53.510 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.510 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:53.510 04:01:21 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:53.511 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.511 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:53.511 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:53.511 04:01:22 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:10:53.774 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:10:53.774 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:10:53.774 "nvmf_tgt_1" 00:10:53.774 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:10:54.032 "nvmf_tgt_2" 00:10:54.032 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:54.032 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:10:54.032 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:10:54.032 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:10:54.032 true 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:10:54.289 true 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:54.289 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:54.289 rmmod nvme_tcp 00:10:54.289 rmmod nvme_fabrics 00:10:54.547 rmmod nvme_keyring 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 183669 ']' 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 183669 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 183669 ']' 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 183669 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 183669 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 183669' 00:10:54.547 killing process with pid 183669 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 183669 00:10:54.547 04:01:22 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 183669 00:10:54.807 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.808 04:01:23 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:56.717 00:10:56.717 real 0m6.001s 00:10:56.717 user 0m6.864s 00:10:56.717 sys 0m2.052s 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:56.717 ************************************ 00:10:56.717 END TEST nvmf_multitarget 00:10:56.717 ************************************ 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:56.717 ************************************ 00:10:56.717 START TEST nvmf_rpc 00:10:56.717 ************************************ 00:10:56.717 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:10:56.977 * Looking for test storage... 
00:10:56.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:56.977 04:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:56.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.977 --rc genhtml_branch_coverage=1 00:10:56.977 --rc genhtml_function_coverage=1 00:10:56.977 --rc genhtml_legend=1 00:10:56.977 --rc geninfo_all_blocks=1 00:10:56.977 --rc geninfo_unexecuted_blocks=1 
00:10:56.977 00:10:56.977 ' 00:10:56.977 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:56.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.977 --rc genhtml_branch_coverage=1 00:10:56.977 --rc genhtml_function_coverage=1 00:10:56.977 --rc genhtml_legend=1 00:10:56.977 --rc geninfo_all_blocks=1 00:10:56.977 --rc geninfo_unexecuted_blocks=1 00:10:56.977 00:10:56.978 ' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:56.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.978 --rc genhtml_branch_coverage=1 00:10:56.978 --rc genhtml_function_coverage=1 00:10:56.978 --rc genhtml_legend=1 00:10:56.978 --rc geninfo_all_blocks=1 00:10:56.978 --rc geninfo_unexecuted_blocks=1 00:10:56.978 00:10:56.978 ' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:56.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:56.978 --rc genhtml_branch_coverage=1 00:10:56.978 --rc genhtml_function_coverage=1 00:10:56.978 --rc genhtml_legend=1 00:10:56.978 --rc geninfo_all_blocks=1 00:10:56.978 --rc geninfo_unexecuted_blocks=1 00:10:56.978 00:10:56.978 ' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:56.978 04:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:56.978 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:56.978 04:01:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:10:56.978 04:01:25 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.519 
04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:10:59.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:59.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.519 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:59.520 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:59.520 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.520 04:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.520 
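The `Found net devices under 0000:0a:00.0: cvl_0_0` lines above come from globbing each PCI function's `net/` directory in sysfs and stripping the results to basenames (`"${pci_net_devs[@]##*/}"`). A runnable sketch of that mapping, using a temporary fake sysfs tree so it works without the actual E810 hardware:

```shell
# Fake the sysfs layout the harness walks: /sys/bus/pci/devices/<bdf>/net/<if>
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/0000:0a:00.0/net/cvl_0_0"

# Glob the device's net/ directory, then keep only the interface basenames,
# exactly as common.sh does with its pci_net_devs array.
pci_net_devs=("$fake_sys/0000:0a:00.0/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under 0000:0a:00.0: ${pci_net_devs[*]}"

rm -rf "$fake_sys"
```

The real script additionally filters on interface operstate (`[[ up == up ]]` in the trace) before appending to `net_devs`.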
04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:59.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:10:59.520 00:10:59.520 --- 10.0.0.2 ping statistics --- 00:10:59.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.520 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:59.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:59.520 00:10:59.520 --- 10.0.0.1 ping statistics --- 00:10:59.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.520 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=185786 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.520 
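The `nvmf_tcp_init` steps above split the two ports of one NIC into target and initiator roles: `cvl_0_0` (10.0.0.2) is moved into the `cvl_0_0_ns_spdk` namespace where `nvmf_tgt` runs, while `cvl_0_1` (10.0.0.1) stays in the root namespace, and the cross-namespace pings verify the path. A condensed dry-run of that sequence; `IP="echo ip"` prints the commands instead of executing them, since the real ones require root:

```shell
# Dry-run of the namespace split performed by nvmf_tcp_init; swap
# IP="echo ip" for IP=ip (as root) to actually apply it.
IP="echo ip"
NS=cvl_0_0_ns_spdk

$IP netns add "$NS"
$IP link set cvl_0_0 netns "$NS"                      # target port into netns
$IP addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
$IP netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
$IP link set cvl_0_1 up
$IP netns exec "$NS" ip link set cvl_0_0 up
```

The trace also inserts an iptables `ACCEPT` rule for TCP port 4420 on the initiator interface before pinging, so the NVMe/TCP listener is reachable once the target starts.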
04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 185786 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 185786 ']' 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.520 04:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.520 [2024-12-09 04:01:27.768520] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:10:59.520 [2024-12-09 04:01:27.768625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.520 [2024-12-09 04:01:27.842687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.520 [2024-12-09 04:01:27.900020] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.520 [2024-12-09 04:01:27.900090] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.520 [2024-12-09 04:01:27.900103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.520 [2024-12-09 04:01:27.900129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:59.520 [2024-12-09 04:01:27.900139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.520 [2024-12-09 04:01:27.901748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.520 [2024-12-09 04:01:27.901827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.520 [2024-12-09 04:01:27.901772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.520 [2024-12-09 04:01:27.901831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.520 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:59.520 "tick_rate": 2700000000, 00:10:59.520 "poll_groups": [ 00:10:59.520 { 00:10:59.521 "name": "nvmf_tgt_poll_group_000", 00:10:59.521 "admin_qpairs": 0, 00:10:59.521 "io_qpairs": 0, 00:10:59.521 
"current_admin_qpairs": 0, 00:10:59.521 "current_io_qpairs": 0, 00:10:59.521 "pending_bdev_io": 0, 00:10:59.521 "completed_nvme_io": 0, 00:10:59.521 "transports": [] 00:10:59.521 }, 00:10:59.521 { 00:10:59.521 "name": "nvmf_tgt_poll_group_001", 00:10:59.521 "admin_qpairs": 0, 00:10:59.521 "io_qpairs": 0, 00:10:59.521 "current_admin_qpairs": 0, 00:10:59.521 "current_io_qpairs": 0, 00:10:59.521 "pending_bdev_io": 0, 00:10:59.521 "completed_nvme_io": 0, 00:10:59.521 "transports": [] 00:10:59.521 }, 00:10:59.521 { 00:10:59.521 "name": "nvmf_tgt_poll_group_002", 00:10:59.521 "admin_qpairs": 0, 00:10:59.521 "io_qpairs": 0, 00:10:59.521 "current_admin_qpairs": 0, 00:10:59.521 "current_io_qpairs": 0, 00:10:59.521 "pending_bdev_io": 0, 00:10:59.521 "completed_nvme_io": 0, 00:10:59.521 "transports": [] 00:10:59.521 }, 00:10:59.521 { 00:10:59.521 "name": "nvmf_tgt_poll_group_003", 00:10:59.521 "admin_qpairs": 0, 00:10:59.521 "io_qpairs": 0, 00:10:59.521 "current_admin_qpairs": 0, 00:10:59.521 "current_io_qpairs": 0, 00:10:59.521 "pending_bdev_io": 0, 00:10:59.521 "completed_nvme_io": 0, 00:10:59.521 "transports": [] 00:10:59.521 } 00:10:59.521 ] 00:10:59.521 }' 00:10:59.521 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:59.521 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:59.521 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:59.521 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.779 [2024-12-09 04:01:28.151587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.779 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:59.779 "tick_rate": 2700000000, 00:10:59.779 "poll_groups": [ 00:10:59.779 { 00:10:59.779 "name": "nvmf_tgt_poll_group_000", 00:10:59.779 "admin_qpairs": 0, 00:10:59.779 "io_qpairs": 0, 00:10:59.779 "current_admin_qpairs": 0, 00:10:59.779 "current_io_qpairs": 0, 00:10:59.779 "pending_bdev_io": 0, 00:10:59.779 "completed_nvme_io": 0, 00:10:59.779 "transports": [ 00:10:59.779 { 00:10:59.779 "trtype": "TCP" 00:10:59.779 } 00:10:59.779 ] 00:10:59.779 }, 00:10:59.779 { 00:10:59.779 "name": "nvmf_tgt_poll_group_001", 00:10:59.779 "admin_qpairs": 0, 00:10:59.779 "io_qpairs": 0, 00:10:59.779 "current_admin_qpairs": 0, 00:10:59.779 "current_io_qpairs": 0, 00:10:59.779 "pending_bdev_io": 0, 00:10:59.779 "completed_nvme_io": 0, 00:10:59.779 "transports": [ 00:10:59.779 { 00:10:59.779 "trtype": "TCP" 00:10:59.779 } 00:10:59.779 ] 00:10:59.779 }, 00:10:59.779 { 00:10:59.779 "name": "nvmf_tgt_poll_group_002", 00:10:59.779 "admin_qpairs": 0, 00:10:59.779 "io_qpairs": 0, 00:10:59.779 
"current_admin_qpairs": 0, 00:10:59.779 "current_io_qpairs": 0, 00:10:59.780 "pending_bdev_io": 0, 00:10:59.780 "completed_nvme_io": 0, 00:10:59.780 "transports": [ 00:10:59.780 { 00:10:59.780 "trtype": "TCP" 00:10:59.780 } 00:10:59.780 ] 00:10:59.780 }, 00:10:59.780 { 00:10:59.780 "name": "nvmf_tgt_poll_group_003", 00:10:59.780 "admin_qpairs": 0, 00:10:59.780 "io_qpairs": 0, 00:10:59.780 "current_admin_qpairs": 0, 00:10:59.780 "current_io_qpairs": 0, 00:10:59.780 "pending_bdev_io": 0, 00:10:59.780 "completed_nvme_io": 0, 00:10:59.780 "transports": [ 00:10:59.780 { 00:10:59.780 "trtype": "TCP" 00:10:59.780 } 00:10:59.780 ] 00:10:59.780 } 00:10:59.780 ] 00:10:59.780 }' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 Malloc1 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:59.780 [2024-12-09 04:01:28.323350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.780 
04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:10:59.780 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:59.780 [2024-12-09 04:01:28.346068] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:00.037 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:00.037 could not add new controller: failed to write to nvme-fabrics device 00:11:00.037 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:00.037 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:00.038 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:00.038 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:00.038 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:00.038 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.038 04:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.038 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.038 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.602 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.602 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:00.602 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.602 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:00.602 04:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:02.498 04:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.498 04:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.498 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:02.498 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:02.498 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:02.756 [2024-12-09 04:01:31.125714] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:02.756 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:02.756 could not add new controller: failed to write to nvme-fabrics device 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:02.756 04:01:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:02.756 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.323 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.323 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:03.323 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.323 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:03.323 04:01:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.852 [2024-12-09 04:01:33.956496] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.852 04:01:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.110 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.110 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:06.110 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.110 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:06.110 04:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.645 04:01:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.645 [2024-12-09 04:01:36.794847] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.645 04:01:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.211 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.211 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:09.211 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.211 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:09.211 04:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.106 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 [2024-12-09 04:01:39.628203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.106 04:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.038 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.038 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:12.038 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:12.038 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:12.038 04:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.939 [2024-12-09 04:01:42.446373] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.939 04:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.505 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.505 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:14.505 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.505 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:14.505 04:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:17.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 [2024-12-09 04:01:45.217969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 04:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.030 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.031 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.287 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:17.287 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.287 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.287 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:17.287 04:01:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:19.819 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.819 [2024-12-09 04:01:47.979047] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.819 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.820 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 [2024-12-09 04:01:48.027115] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.820 
04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 [2024-12-09 04:01:48.075285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:19.820 
04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 [2024-12-09 04:01:48.123473] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.820 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 [2024-12-09 
04:01:48.171655] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 
04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:19.821 "tick_rate": 2700000000, 00:11:19.821 "poll_groups": [ 00:11:19.821 { 00:11:19.821 "name": "nvmf_tgt_poll_group_000", 00:11:19.821 "admin_qpairs": 2, 00:11:19.821 "io_qpairs": 84, 00:11:19.821 "current_admin_qpairs": 0, 00:11:19.821 "current_io_qpairs": 0, 00:11:19.821 "pending_bdev_io": 0, 00:11:19.821 "completed_nvme_io": 183, 00:11:19.821 "transports": [ 00:11:19.821 { 00:11:19.821 "trtype": "TCP" 00:11:19.821 } 00:11:19.821 ] 00:11:19.821 }, 00:11:19.821 { 00:11:19.821 "name": "nvmf_tgt_poll_group_001", 00:11:19.821 "admin_qpairs": 2, 00:11:19.821 "io_qpairs": 84, 00:11:19.821 "current_admin_qpairs": 0, 00:11:19.821 "current_io_qpairs": 0, 00:11:19.821 "pending_bdev_io": 0, 00:11:19.821 "completed_nvme_io": 156, 00:11:19.821 "transports": [ 00:11:19.821 { 00:11:19.821 "trtype": "TCP" 00:11:19.821 } 00:11:19.821 ] 00:11:19.821 }, 00:11:19.821 { 00:11:19.821 "name": "nvmf_tgt_poll_group_002", 00:11:19.821 "admin_qpairs": 1, 00:11:19.821 "io_qpairs": 84, 00:11:19.821 "current_admin_qpairs": 0, 00:11:19.821 "current_io_qpairs": 0, 00:11:19.821 "pending_bdev_io": 0, 00:11:19.821 "completed_nvme_io": 163, 00:11:19.821 "transports": [ 00:11:19.821 { 00:11:19.821 "trtype": "TCP" 00:11:19.821 } 00:11:19.821 ] 00:11:19.821 }, 00:11:19.821 { 00:11:19.821 "name": "nvmf_tgt_poll_group_003", 00:11:19.821 "admin_qpairs": 2, 00:11:19.821 "io_qpairs": 84, 
00:11:19.821 "current_admin_qpairs": 0, 00:11:19.821 "current_io_qpairs": 0, 00:11:19.821 "pending_bdev_io": 0, 00:11:19.821 "completed_nvme_io": 184, 00:11:19.821 "transports": [ 00:11:19.821 { 00:11:19.821 "trtype": "TCP" 00:11:19.821 } 00:11:19.821 ] 00:11:19.821 } 00:11:19.821 ] 00:11:19.821 }' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.821 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.822 rmmod nvme_tcp 00:11:19.822 rmmod nvme_fabrics 00:11:19.822 rmmod nvme_keyring 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 185786 ']' 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 185786 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 185786 ']' 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 185786 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.822 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 185786 00:11:20.080 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.080 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.080 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 185786' 00:11:20.080 killing process with pid 185786 00:11:20.080 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 185786 00:11:20.080 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 185786 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:20.341 04:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.244 00:11:22.244 real 0m25.446s 00:11:22.244 user 1m22.240s 00:11:22.244 sys 0m4.446s 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.244 ************************************ 00:11:22.244 END TEST nvmf_rpc 00:11:22.244 
************************************ 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.244 ************************************ 00:11:22.244 START TEST nvmf_invalid 00:11:22.244 ************************************ 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:22.244 * Looking for test storage... 00:11:22.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.244 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.502 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.502 --rc genhtml_branch_coverage=1 00:11:22.502 --rc genhtml_function_coverage=1 00:11:22.502 --rc genhtml_legend=1 00:11:22.503 --rc geninfo_all_blocks=1 00:11:22.503 --rc geninfo_unexecuted_blocks=1 00:11:22.503 00:11:22.503 ' 
00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.503 --rc genhtml_branch_coverage=1 00:11:22.503 --rc genhtml_function_coverage=1 00:11:22.503 --rc genhtml_legend=1 00:11:22.503 --rc geninfo_all_blocks=1 00:11:22.503 --rc geninfo_unexecuted_blocks=1 00:11:22.503 00:11:22.503 ' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.503 --rc genhtml_branch_coverage=1 00:11:22.503 --rc genhtml_function_coverage=1 00:11:22.503 --rc genhtml_legend=1 00:11:22.503 --rc geninfo_all_blocks=1 00:11:22.503 --rc geninfo_unexecuted_blocks=1 00:11:22.503 00:11:22.503 ' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.503 --rc genhtml_branch_coverage=1 00:11:22.503 --rc genhtml_function_coverage=1 00:11:22.503 --rc genhtml_legend=1 00:11:22.503 --rc geninfo_all_blocks=1 00:11:22.503 --rc geninfo_unexecuted_blocks=1 00:11:22.503 00:11:22.503 ' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.503 04:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.503 
04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.503 04:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.503 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.503 04:01:50 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.503 04:01:50 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:11:25.032 04:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.032 04:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:25.032 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:25.032 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.032 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:25.033 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:25.033 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.033 04:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.033 04:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:25.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:25.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:11:25.033 00:11:25.033 --- 10.0.0.2 ping statistics --- 00:11:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.033 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:11:25.033 00:11:25.033 --- 10.0.0.1 ping statistics --- 00:11:25.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.033 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:25.033 04:01:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=190290 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 190290 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 190290 ']' 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:25.033 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:25.033 [2024-12-09 04:01:53.357192] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:11:25.033 [2024-12-09 04:01:53.357296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.033 [2024-12-09 04:01:53.430489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:25.033 [2024-12-09 04:01:53.487701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.033 [2024-12-09 04:01:53.487760] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.033 [2024-12-09 04:01:53.487790] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.033 [2024-12-09 04:01:53.487801] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.033 [2024-12-09 04:01:53.487810] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:25.033 [2024-12-09 04:01:53.489392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:25.033 [2024-12-09 04:01:53.489421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.033 [2024-12-09 04:01:53.489450] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.033 [2024-12-09 04:01:53.489454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:25.291 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode22530 00:11:25.547 [2024-12-09 04:01:53.929365] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:25.547 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:25.547 { 00:11:25.547 "nqn": "nqn.2016-06.io.spdk:cnode22530", 00:11:25.547 "tgt_name": "foobar", 00:11:25.547 "method": "nvmf_create_subsystem", 00:11:25.547 "req_id": 1 00:11:25.547 } 00:11:25.547 Got JSON-RPC error 
response 00:11:25.547 response: 00:11:25.547 { 00:11:25.547 "code": -32603, 00:11:25.547 "message": "Unable to find target foobar" 00:11:25.547 }' 00:11:25.547 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:25.547 { 00:11:25.547 "nqn": "nqn.2016-06.io.spdk:cnode22530", 00:11:25.547 "tgt_name": "foobar", 00:11:25.547 "method": "nvmf_create_subsystem", 00:11:25.547 "req_id": 1 00:11:25.547 } 00:11:25.547 Got JSON-RPC error response 00:11:25.547 response: 00:11:25.547 { 00:11:25.547 "code": -32603, 00:11:25.547 "message": "Unable to find target foobar" 00:11:25.547 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:25.547 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:25.547 04:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode7522 00:11:25.805 [2024-12-09 04:01:54.198302] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7522: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:25.805 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:25.805 { 00:11:25.805 "nqn": "nqn.2016-06.io.spdk:cnode7522", 00:11:25.805 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:25.805 "method": "nvmf_create_subsystem", 00:11:25.805 "req_id": 1 00:11:25.805 } 00:11:25.805 Got JSON-RPC error response 00:11:25.805 response: 00:11:25.805 { 00:11:25.805 "code": -32602, 00:11:25.805 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:25.805 }' 00:11:25.805 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:25.805 { 00:11:25.805 "nqn": "nqn.2016-06.io.spdk:cnode7522", 00:11:25.805 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:25.805 "method": "nvmf_create_subsystem", 00:11:25.805 
"req_id": 1 00:11:25.805 } 00:11:25.805 Got JSON-RPC error response 00:11:25.805 response: 00:11:25.805 { 00:11:25.805 "code": -32602, 00:11:25.805 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:25.805 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:25.805 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:25.805 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31348 00:11:26.063 [2024-12-09 04:01:54.523358] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31348: invalid model number 'SPDK_Controller' 00:11:26.063 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:26.063 { 00:11:26.063 "nqn": "nqn.2016-06.io.spdk:cnode31348", 00:11:26.063 "model_number": "SPDK_Controller\u001f", 00:11:26.063 "method": "nvmf_create_subsystem", 00:11:26.063 "req_id": 1 00:11:26.063 } 00:11:26.063 Got JSON-RPC error response 00:11:26.063 response: 00:11:26.063 { 00:11:26.063 "code": -32602, 00:11:26.063 "message": "Invalid MN SPDK_Controller\u001f" 00:11:26.063 }' 00:11:26.063 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:26.063 { 00:11:26.063 "nqn": "nqn.2016-06.io.spdk:cnode31348", 00:11:26.063 "model_number": "SPDK_Controller\u001f", 00:11:26.063 "method": "nvmf_create_subsystem", 00:11:26.063 "req_id": 1 00:11:26.063 } 00:11:26.063 Got JSON-RPC error response 00:11:26.063 response: 00:11:26.063 { 00:11:26.063 "code": -32602, 00:11:26.063 "message": "Invalid MN SPDK_Controller\u001f" 00:11:26.063 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:26.063 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:26.063 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:26.064 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:26.064 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:26.064 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:26.064 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.065 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'h7~qP>``uPl%%h9\8GdoV' 00:11:26.065 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'h7~qP>``uPl%%h9\8GdoV' nqn.2016-06.io.spdk:cnode2844 00:11:26.322 [2024-12-09 04:01:54.884505] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2844: invalid serial number 'h7~qP>``uPl%%h9\8GdoV' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:26.581 { 00:11:26.581 "nqn": "nqn.2016-06.io.spdk:cnode2844", 00:11:26.581 "serial_number": "h7~qP>``uPl%%h9\\8GdoV", 00:11:26.581 "method": "nvmf_create_subsystem", 00:11:26.581 "req_id": 1 00:11:26.581 } 00:11:26.581 Got JSON-RPC error response 00:11:26.581 response: 00:11:26.581 { 00:11:26.581 "code": -32602, 00:11:26.581 "message": "Invalid SN h7~qP>``uPl%%h9\\8GdoV" 00:11:26.581 }' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:26.581 { 00:11:26.581 "nqn": "nqn.2016-06.io.spdk:cnode2844", 00:11:26.581 "serial_number": "h7~qP>``uPl%%h9\\8GdoV", 00:11:26.581 "method": "nvmf_create_subsystem", 00:11:26.581 "req_id": 1 00:11:26.581 } 00:11:26.581 Got JSON-RPC error response 00:11:26.581 response: 00:11:26.581 { 00:11:26.581 "code": -32602, 00:11:26.581 "message": "Invalid SN h7~qP>``uPl%%h9\\8GdoV" 00:11:26.581 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:26.581 04:01:54 
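The long run of `printf %x` / `echo -e` / `string+=` records above is the trace of the test's random-string helper building the 21-character serial number one character at a time. A minimal sketch of that helper is below; the function and variable names (`gen_random_s`, `chars`, `string`, `ll`, `length`) follow the trace, but the body is a reconstruction under those assumptions, not the verbatim `target/invalid.sh` source.

```shell
# Sketch of the gen_random_s helper traced above (reconstructed, not verbatim).
gen_random_s() {
	local length=$1 ll
	# candidate ASCII codes 32-127, matching the chars=() array in the trace
	local chars=($(seq 32 127))
	local string=
	for ((ll = 0; ll < length; ll++)); do
		# pick a random code, render it as hex, decode with echo -e, append one char
		printf -v hex %x "${chars[RANDOM % ${#chars[@]}]}"
		string+=$(echo -e "\x$hex")
	done
	# the trace's [[ ${string:0:1} == \- ]] guard: a leading '-' would be
	# parsed as an option by rpc.py, so such strings must be avoided
	[[ ${string:0:1} == - ]] && string="/${string:1}"
	echo "$string"
}
```

The generated string (here `h7~qP>``uPl%%h9\8GdoV`) is then passed as `-s` to `rpc.py nvmf_create_subsystem`, which is expected to reject it as an invalid serial number.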
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.581 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:26.581 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:26.581 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:26.582 04:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.582 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.583 04:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'u{D_rEvE>& 011SQ7~A`%}^5Sg%TTz<1d(w:wD )' 00:11:26.583 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'u{D_rEvE>& 011SQ7~A`%}^5Sg%TTz<1d(w:wD )' nqn.2016-06.io.spdk:cnode19814 00:11:26.839 [2024-12-09 04:01:55.269764] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19814: invalid model number 'u{D_rEvE>& 011SQ7~A`%}^5Sg%TTz<1d(w:wD )' 00:11:26.839 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:26.839 { 00:11:26.839 "nqn": "nqn.2016-06.io.spdk:cnode19814", 00:11:26.839 "model_number": "u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )", 00:11:26.839 "method": "nvmf_create_subsystem", 00:11:26.839 "req_id": 1 00:11:26.839 } 00:11:26.839 Got JSON-RPC error response 00:11:26.839 response: 00:11:26.839 { 00:11:26.839 "code": -32602, 00:11:26.839 "message": "Invalid MN u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )" 00:11:26.839 }' 00:11:26.839 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:26.839 { 00:11:26.839 "nqn": 
"nqn.2016-06.io.spdk:cnode19814", 00:11:26.839 "model_number": "u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )", 00:11:26.839 "method": "nvmf_create_subsystem", 00:11:26.839 "req_id": 1 00:11:26.839 } 00:11:26.839 Got JSON-RPC error response 00:11:26.839 response: 00:11:26.839 { 00:11:26.839 "code": -32602, 00:11:26.839 "message": "Invalid MN u{D_rEvE>& 011SQ\u007f7~A`%}^5Sg%TTz<1d(w:wD )" 00:11:26.839 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:26.839 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:27.095 [2024-12-09 04:01:55.538727] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:27.095 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:27.351 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:27.351 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:27.351 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:27.351 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:27.351 04:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:27.608 [2024-12-09 04:01:56.092534] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:27.608 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:27.608 { 00:11:27.608 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:27.608 "listen_address": { 00:11:27.608 "trtype": "tcp", 00:11:27.608 "traddr": "", 00:11:27.608 "trsvcid": 
"4421" 00:11:27.608 }, 00:11:27.608 "method": "nvmf_subsystem_remove_listener", 00:11:27.608 "req_id": 1 00:11:27.608 } 00:11:27.608 Got JSON-RPC error response 00:11:27.608 response: 00:11:27.608 { 00:11:27.608 "code": -32602, 00:11:27.608 "message": "Invalid parameters" 00:11:27.608 }' 00:11:27.608 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:27.608 { 00:11:27.608 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:27.608 "listen_address": { 00:11:27.608 "trtype": "tcp", 00:11:27.608 "traddr": "", 00:11:27.608 "trsvcid": "4421" 00:11:27.608 }, 00:11:27.608 "method": "nvmf_subsystem_remove_listener", 00:11:27.608 "req_id": 1 00:11:27.608 } 00:11:27.608 Got JSON-RPC error response 00:11:27.608 response: 00:11:27.608 { 00:11:27.608 "code": -32602, 00:11:27.608 "message": "Invalid parameters" 00:11:27.608 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:27.608 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31283 -i 0 00:11:27.866 [2024-12-09 04:01:56.373431] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31283: invalid cntlid range [0-65519] 00:11:27.866 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:27.866 { 00:11:27.866 "nqn": "nqn.2016-06.io.spdk:cnode31283", 00:11:27.866 "min_cntlid": 0, 00:11:27.866 "method": "nvmf_create_subsystem", 00:11:27.866 "req_id": 1 00:11:27.866 } 00:11:27.866 Got JSON-RPC error response 00:11:27.866 response: 00:11:27.866 { 00:11:27.866 "code": -32602, 00:11:27.866 "message": "Invalid cntlid range [0-65519]" 00:11:27.866 }' 00:11:27.866 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:27.866 { 00:11:27.866 "nqn": "nqn.2016-06.io.spdk:cnode31283", 00:11:27.866 "min_cntlid": 0, 00:11:27.866 "method": 
"nvmf_create_subsystem", 00:11:27.866 "req_id": 1 00:11:27.866 } 00:11:27.866 Got JSON-RPC error response 00:11:27.866 response: 00:11:27.866 { 00:11:27.866 "code": -32602, 00:11:27.866 "message": "Invalid cntlid range [0-65519]" 00:11:27.866 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:27.866 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8699 -i 65520 00:11:28.123 [2024-12-09 04:01:56.646405] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8699: invalid cntlid range [65520-65519] 00:11:28.123 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:28.123 { 00:11:28.123 "nqn": "nqn.2016-06.io.spdk:cnode8699", 00:11:28.123 "min_cntlid": 65520, 00:11:28.123 "method": "nvmf_create_subsystem", 00:11:28.123 "req_id": 1 00:11:28.123 } 00:11:28.123 Got JSON-RPC error response 00:11:28.123 response: 00:11:28.123 { 00:11:28.123 "code": -32602, 00:11:28.123 "message": "Invalid cntlid range [65520-65519]" 00:11:28.123 }' 00:11:28.123 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:28.123 { 00:11:28.123 "nqn": "nqn.2016-06.io.spdk:cnode8699", 00:11:28.123 "min_cntlid": 65520, 00:11:28.123 "method": "nvmf_create_subsystem", 00:11:28.123 "req_id": 1 00:11:28.123 } 00:11:28.123 Got JSON-RPC error response 00:11:28.123 response: 00:11:28.123 { 00:11:28.123 "code": -32602, 00:11:28.123 "message": "Invalid cntlid range [65520-65519]" 00:11:28.123 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:28.123 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26540 -I 0 00:11:28.380 [2024-12-09 04:01:56.923315] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode26540: invalid cntlid range [1-0] 00:11:28.380 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:28.380 { 00:11:28.380 "nqn": "nqn.2016-06.io.spdk:cnode26540", 00:11:28.380 "max_cntlid": 0, 00:11:28.380 "method": "nvmf_create_subsystem", 00:11:28.380 "req_id": 1 00:11:28.380 } 00:11:28.380 Got JSON-RPC error response 00:11:28.380 response: 00:11:28.380 { 00:11:28.380 "code": -32602, 00:11:28.380 "message": "Invalid cntlid range [1-0]" 00:11:28.380 }' 00:11:28.380 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:28.380 { 00:11:28.380 "nqn": "nqn.2016-06.io.spdk:cnode26540", 00:11:28.380 "max_cntlid": 0, 00:11:28.380 "method": "nvmf_create_subsystem", 00:11:28.380 "req_id": 1 00:11:28.380 } 00:11:28.380 Got JSON-RPC error response 00:11:28.380 response: 00:11:28.380 { 00:11:28.380 "code": -32602, 00:11:28.380 "message": "Invalid cntlid range [1-0]" 00:11:28.380 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:28.380 04:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21269 -I 65520 00:11:28.638 [2024-12-09 04:01:57.184160] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21269: invalid cntlid range [1-65520] 00:11:28.638 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:28.638 { 00:11:28.638 "nqn": "nqn.2016-06.io.spdk:cnode21269", 00:11:28.638 "max_cntlid": 65520, 00:11:28.638 "method": "nvmf_create_subsystem", 00:11:28.638 "req_id": 1 00:11:28.638 } 00:11:28.638 Got JSON-RPC error response 00:11:28.638 response: 00:11:28.638 { 00:11:28.638 "code": -32602, 00:11:28.638 "message": "Invalid cntlid range [1-65520]" 00:11:28.638 }' 00:11:28.638 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:11:28.638 { 00:11:28.638 "nqn": "nqn.2016-06.io.spdk:cnode21269", 00:11:28.638 "max_cntlid": 65520, 00:11:28.638 "method": "nvmf_create_subsystem", 00:11:28.638 "req_id": 1 00:11:28.638 } 00:11:28.638 Got JSON-RPC error response 00:11:28.638 response: 00:11:28.638 { 00:11:28.638 "code": -32602, 00:11:28.638 "message": "Invalid cntlid range [1-65520]" 00:11:28.638 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:28.638 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5812 -i 6 -I 5 00:11:28.895 [2024-12-09 04:01:57.461126] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5812: invalid cntlid range [6-5] 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:29.152 { 00:11:29.152 "nqn": "nqn.2016-06.io.spdk:cnode5812", 00:11:29.152 "min_cntlid": 6, 00:11:29.152 "max_cntlid": 5, 00:11:29.152 "method": "nvmf_create_subsystem", 00:11:29.152 "req_id": 1 00:11:29.152 } 00:11:29.152 Got JSON-RPC error response 00:11:29.152 response: 00:11:29.152 { 00:11:29.152 "code": -32602, 00:11:29.152 "message": "Invalid cntlid range [6-5]" 00:11:29.152 }' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:29.152 { 00:11:29.152 "nqn": "nqn.2016-06.io.spdk:cnode5812", 00:11:29.152 "min_cntlid": 6, 00:11:29.152 "max_cntlid": 5, 00:11:29.152 "method": "nvmf_create_subsystem", 00:11:29.152 "req_id": 1 00:11:29.152 } 00:11:29.152 Got JSON-RPC error response 00:11:29.152 response: 00:11:29.152 { 00:11:29.152 "code": -32602, 00:11:29.152 "message": "Invalid cntlid range [6-5]" 00:11:29.152 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:29.152 { 00:11:29.152 "name": "foobar", 00:11:29.152 "method": "nvmf_delete_target", 00:11:29.152 "req_id": 1 00:11:29.152 } 00:11:29.152 Got JSON-RPC error response 00:11:29.152 response: 00:11:29.152 { 00:11:29.152 "code": -32602, 00:11:29.152 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:29.152 }' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:29.152 { 00:11:29.152 "name": "foobar", 00:11:29.152 "method": "nvmf_delete_target", 00:11:29.152 "req_id": 1 00:11:29.152 } 00:11:29.152 Got JSON-RPC error response 00:11:29.152 response: 00:11:29.152 { 00:11:29.152 "code": -32602, 00:11:29.152 "message": "The specified target doesn't exist, cannot delete it." 00:11:29.152 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:29.152 rmmod nvme_tcp 00:11:29.152 
rmmod nvme_fabrics 00:11:29.152 rmmod nvme_keyring 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 190290 ']' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 190290 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 190290 ']' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 190290 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 190290 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 190290' 00:11:29.152 killing process with pid 190290 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 190290 00:11:29.152 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 190290 00:11:29.409 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:29.409 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:29.409 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:29.409 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:11:29.409 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:11:29.409 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:29.410 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:11:29.410 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:29.410 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:29.410 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.410 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:29.410 04:01:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.948 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:31.948 00:11:31.948 real 0m9.200s 00:11:31.948 user 0m21.819s 00:11:31.948 sys 0m2.596s 00:11:31.948 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.948 04:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:31.948 ************************************ 00:11:31.948 END TEST nvmf_invalid 00:11:31.948 ************************************ 00:11:31.948 04:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:31.948 04:01:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:31.948 04:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:31.948 04:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:31.948 ************************************ 00:11:31.948 START TEST nvmf_connect_stress 00:11:31.948 ************************************ 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:31.948 * Looking for test storage... 00:11:31.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:11:31.948 04:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.948 --rc genhtml_branch_coverage=1 00:11:31.948 --rc genhtml_function_coverage=1 00:11:31.948 --rc genhtml_legend=1 00:11:31.948 --rc geninfo_all_blocks=1 00:11:31.948 --rc geninfo_unexecuted_blocks=1 00:11:31.948 00:11:31.948 ' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.948 --rc genhtml_branch_coverage=1 00:11:31.948 --rc genhtml_function_coverage=1 00:11:31.948 --rc genhtml_legend=1 00:11:31.948 --rc geninfo_all_blocks=1 00:11:31.948 --rc geninfo_unexecuted_blocks=1 00:11:31.948 00:11:31.948 ' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.948 --rc genhtml_branch_coverage=1 00:11:31.948 --rc genhtml_function_coverage=1 00:11:31.948 --rc genhtml_legend=1 00:11:31.948 --rc geninfo_all_blocks=1 00:11:31.948 --rc geninfo_unexecuted_blocks=1 00:11:31.948 00:11:31.948 ' 00:11:31.948 04:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:31.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:31.948 --rc genhtml_branch_coverage=1 00:11:31.948 --rc genhtml_function_coverage=1 00:11:31.948 --rc genhtml_legend=1 00:11:31.948 --rc geninfo_all_blocks=1 00:11:31.948 --rc geninfo_unexecuted_blocks=1 00:11:31.948 00:11:31.948 ' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.948 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.949 04:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:31.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:31.949 04:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:11:31.949 04:02:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:33.854 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:33.855 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:33.855 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:33.855 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:33.855 04:02:02 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:33.855 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:33.855 
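The discovery loop above (common.sh@410-429) resolves each candidate PCI function to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/*`, then strips the sysfs path with a `##*/` expansion to get bare names like `cvl_0_0`. A minimal standalone sketch of that lookup (the PCI address in the demo array is copied from the log; the sysfs scan simply prints nothing on machines without such entries):

```shell
#!/usr/bin/env bash
# Sketch of the device-discovery step: resolve PCI functions to their
# kernel net devices by globbing /sys/bus/pci/devices/$pci/net/*.
set -u

for pci in /sys/bus/pci/devices/*; do
    for d in "$pci"/net/*; do
        [ -e "$d" ] || continue                  # glob did not match: no net dev
        printf 'Found net devices under %s: %s\n' "${pci##*/}" "${d##*/}"
    done
done

# The "${pci_net_devs[@]##*/}" expansion (common.sh@427) then strips the
# sysfs path prefix, leaving bare interface names; a deterministic demo:
pci_net_devs=("/sys/bus/pci/devices/0000:0a:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"    # cvl_0_0
```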
04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:33.855 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:11:34.143 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.143 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:11:34.143 00:11:34.143 --- 10.0.0.2 ping statistics --- 00:11:34.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.143 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.143 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.143 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:11:34.143 00:11:34.143 --- 10.0.0.1 ping statistics --- 00:11:34.143 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.143 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
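The `nvmf_tcp_init` sequence above builds the test topology: one port of the NIC pair is moved into a private network namespace (`cvl_0_0_ns_spdk`) to act as the target at 10.0.0.2, while its sibling stays in the root namespace as the initiator at 10.0.0.1, an iptables rule accepts the NVMe/TCP port 4420, and a ping in each direction confirms reachability. A hedged standalone sketch (interface names and addresses are taken from the log; the setup needs root and those interfaces, so it is skipped elsewhere):

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init topology: move one NIC port into a private
# namespace as the target, keep its sibling in the root namespace as the
# initiator. Requires root plus the cvl_0_0/cvl_0_1 pair; skipped otherwise.
set -u

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0  INIT_IF=cvl_0_1
TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

if [ "$(id -u)" -eq 0 ] && [ -e "/sys/class/net/$TGT_IF" ]; then
    ip -4 addr flush "$TGT_IF" && ip -4 addr flush "$INIT_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target side enters the netns
    ip addr add "$INIT_IP/24" dev "$INIT_IF"
    ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # Accept NVMe/TCP (port 4420) on the initiator-facing interface.
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    # Reachability check in both directions, as the log does.
    ping -c 1 "$TGT_IP" && ip netns exec "$NS" ping -c 1 "$INIT_IP"
else
    echo "skipping namespace setup: needs root and $TGT_IF/$INIT_IF" >&2
fi
```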
0xE 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=193049 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 193049 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 193049 ']' 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:34.143 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.143 [2024-12-09 04:02:02.526250] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
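The `waitforlisten 193049` step above blocks until the freshly started `nvmf_tgt` is listening on its UNIX-domain RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A minimal sketch of that poll-until-ready pattern (a temporary file stands in for the socket so the sketch runs without SPDK; the variable names mirror the log but the simulated app is an assumption):

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten: poll until the app's RPC socket path appears,
# giving up after max_retries attempts.
set -u

rpc_addr=$(mktemp -u)               # stand-in for /var/tmp/spdk.sock
( sleep 0.3; : > "$rpc_addr" ) &    # simulated app creating its socket late

max_retries=100
i=0
until [ -e "$rpc_addr" ]; do
    i=$((i + 1))
    [ "$i" -ge "$max_retries" ] && { echo "timed out" >&2; break; }
    sleep 0.1
done
wait
echo "socket ready after $i retries"
rm -f "$rpc_addr"
```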
00:11:34.143 [2024-12-09 04:02:02.526340] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.143 [2024-12-09 04:02:02.595969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:34.143 [2024-12-09 04:02:02.650242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.143 [2024-12-09 04:02:02.650306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.143 [2024-12-09 04:02:02.650334] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.143 [2024-12-09 04:02:02.650345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.143 [2024-12-09 04:02:02.650354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:34.143 [2024-12-09 04:02:02.651856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.143 [2024-12-09 04:02:02.651918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.143 [2024-12-09 04:02:02.651922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 [2024-12-09 04:02:02.798410] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 [2024-12-09 04:02:02.815743] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.403 NULL1 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=193074 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.403 04:02:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.660 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.660 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:34.660 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.660 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
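The repeating `kill -0 193074` / `rpc_cmd` pattern above is the stress monitor: `kill -0 $PERF_PID` delivers no signal, it only tests whether the connect_stress process still exists, and while it does, the harness replays the RPC batch assembled into rpc.txt by the `seq 1 20` / `cat` loop. A minimal sketch of that liveness loop against a stand-in background process (`sleep` here is a placeholder for the connect_stress workload):

```shell
#!/usr/bin/env bash
# Sketch of the connect_stress watchdog: poll a PID with `kill -0`
# (signal 0 = existence check only) until the process exits.
set -u

sleep 1 &                      # stand-in for the connect_stress workload
PERF_PID=$!

polls=0
while kill -0 "$PERF_PID" 2>/dev/null; do
    polls=$((polls + 1))       # the real harness fires an RPC batch here
    sleep 0.2
done
wait "$PERF_PID" 2>/dev/null || true
echo "process $PERF_PID exited after $polls polls"
```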
common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.660 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.224 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.224 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:35.224 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.224 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.224 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.481 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.481 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:35.481 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.481 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.481 04:02:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.738 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.738 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:35.738 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.738 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.738 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.996 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.996 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:35.996 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.996 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.996 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.293 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.293 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:36.293 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.293 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.293 04:02:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.551 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.551 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:36.551 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.551 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.551 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.115 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.115 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:37.115 04:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.115 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.115 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.373 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.373 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:37.373 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.373 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.373 04:02:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.630 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.630 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:37.630 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.630 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.630 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.888 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.888 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:37.888 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.888 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.888 04:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.455 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.455 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:38.455 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.455 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.455 04:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.713 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.713 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:38.713 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.713 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.713 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.971 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.971 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:38.971 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.971 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.971 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.230 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.230 04:02:07 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:39.230 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.230 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.230 04:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.489 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.489 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:39.489 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:39.489 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.489 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.055 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.055 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:40.055 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.055 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.055 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.314 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.314 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:40.314 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.314 04:02:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.314 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.572 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.572 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:40.572 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.572 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.572 04:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:40.830 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.830 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:40.830 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:40.830 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.830 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.088 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.088 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:41.088 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.088 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.088 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.656 04:02:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.656 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:41.656 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.656 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.656 04:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.914 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.914 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:41.914 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:41.914 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.914 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.172 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.172 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:42.172 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.172 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.172 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.430 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.430 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:42.430 
04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.430 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.430 04:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:42.688 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.688 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:42.688 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:42.688 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.688 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.254 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.254 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:43.254 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.254 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.254 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.513 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.513 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:43.513 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.513 04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.513 
04:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:43.772 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.772 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:43.772 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:43.772 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.772 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.031 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.031 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:44.031 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.031 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.031 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.289 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.289 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:44.289 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:44.289 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.289 04:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:44.547 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:44.806 04:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 193074 00:11:44.806 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (193074) - No such process 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 193074 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:44.806 rmmod nvme_tcp 00:11:44.806 rmmod nvme_fabrics 00:11:44.806 rmmod nvme_keyring 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 193049 ']' 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 193049 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 193049 ']' 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 193049 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193049 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193049' 00:11:44.806 killing process with pid 193049 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 193049 00:11:44.806 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 193049 00:11:45.066 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:45.066 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.067 04:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.977 00:11:46.977 real 0m15.508s 00:11:46.977 user 0m39.896s 00:11:46.977 sys 0m4.711s 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:46.977 ************************************ 00:11:46.977 END TEST nvmf_connect_stress 00:11:46.977 ************************************ 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:11:46.977 04:02:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.236 ************************************ 00:11:47.236 START TEST nvmf_fused_ordering 00:11:47.236 ************************************ 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:47.236 * Looking for test storage... 00:11:47.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:11:47.236 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.237 --rc genhtml_branch_coverage=1 00:11:47.237 --rc genhtml_function_coverage=1 00:11:47.237 --rc genhtml_legend=1 00:11:47.237 --rc geninfo_all_blocks=1 00:11:47.237 --rc geninfo_unexecuted_blocks=1 00:11:47.237 00:11:47.237 ' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.237 --rc genhtml_branch_coverage=1 00:11:47.237 --rc genhtml_function_coverage=1 00:11:47.237 --rc genhtml_legend=1 00:11:47.237 --rc geninfo_all_blocks=1 00:11:47.237 --rc geninfo_unexecuted_blocks=1 00:11:47.237 00:11:47.237 ' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.237 --rc genhtml_branch_coverage=1 00:11:47.237 --rc genhtml_function_coverage=1 00:11:47.237 --rc genhtml_legend=1 00:11:47.237 --rc geninfo_all_blocks=1 00:11:47.237 --rc geninfo_unexecuted_blocks=1 00:11:47.237 00:11:47.237 ' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:47.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.237 --rc genhtml_branch_coverage=1 
00:11:47.237 --rc genhtml_function_coverage=1 00:11:47.237 --rc genhtml_legend=1 00:11:47.237 --rc geninfo_all_blocks=1 00:11:47.237 --rc geninfo_unexecuted_blocks=1 00:11:47.237 00:11:47.237 ' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:47.237 04:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:47.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:11:47.237 04:02:15 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.772 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:49.772 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:11:49.772 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.773 04:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:49.773 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.773 04:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:49.773 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.773 04:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:49.773 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:49.773 Found net devices under 0000:0a:00.1: cvl_0_1 
00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:49.773 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:49.774 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.774 04:02:17 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:49.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:11:49.774 00:11:49.774 --- 10.0.0.2 ping statistics --- 00:11:49.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.774 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:11:49.774 00:11:49.774 --- 10.0.0.1 ping statistics --- 00:11:49.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.774 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:49.774 04:02:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=196235 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 196235 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 196235 ']' 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:49.774 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:49.774 [2024-12-09 04:02:18.249966] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:11:49.774 [2024-12-09 04:02:18.250049] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.774 [2024-12-09 04:02:18.323192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.033 [2024-12-09 04:02:18.383008] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.033 [2024-12-09 04:02:18.383056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:50.033 [2024-12-09 04:02:18.383085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:50.033 [2024-12-09 04:02:18.383096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:50.033 [2024-12-09 04:02:18.383105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:50.033 [2024-12-09 04:02:18.383770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 [2024-12-09 04:02:18.520874] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 [2024-12-09 04:02:18.537099] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 NULL1 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.033 04:02:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:50.033 [2024-12-09 04:02:18.580951] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:11:50.033 [2024-12-09 04:02:18.580985] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid196371 ] 00:11:50.599 Attached to nqn.2016-06.io.spdk:cnode1 00:11:50.599 Namespace ID: 1 size: 1GB 00:11:50.599 fused_ordering(0) 00:11:50.599 fused_ordering(1) 00:11:50.599 fused_ordering(2) 00:11:50.599 fused_ordering(3) 00:11:50.599 fused_ordering(4) 00:11:50.599 fused_ordering(5) 00:11:50.599 fused_ordering(6) 00:11:50.599 fused_ordering(7) 00:11:50.599 fused_ordering(8) 00:11:50.599 fused_ordering(9) 00:11:50.599 fused_ordering(10) 00:11:50.599 fused_ordering(11) 00:11:50.599 fused_ordering(12) 00:11:50.599 fused_ordering(13) 00:11:50.599 fused_ordering(14) 00:11:50.599 fused_ordering(15) 00:11:50.599 fused_ordering(16) 00:11:50.600 fused_ordering(17) 00:11:50.600 fused_ordering(18) 00:11:50.600 fused_ordering(19) 00:11:50.600 fused_ordering(20) 00:11:50.600 fused_ordering(21) 00:11:50.600 fused_ordering(22) 00:11:50.600 fused_ordering(23) 00:11:50.600 fused_ordering(24) 00:11:50.600 fused_ordering(25) 00:11:50.600 fused_ordering(26) 00:11:50.600 fused_ordering(27) 00:11:50.600 
00:11:50.600 fused_ordering(28) … fused_ordering(513) (counter output continues uninterrupted; timestamps advance 00:11:50.600 → 00:11:50.859 at entry 206 → 00:11:51.118 at entry 411; run truncated at entry 513)
00:11:51.118 fused_ordering(514) 00:11:51.118 fused_ordering(515) 00:11:51.118 fused_ordering(516) 00:11:51.118 fused_ordering(517) 00:11:51.118 fused_ordering(518) 00:11:51.118 fused_ordering(519) 00:11:51.118 fused_ordering(520) 00:11:51.118 fused_ordering(521) 00:11:51.118 fused_ordering(522) 00:11:51.118 fused_ordering(523) 00:11:51.118 fused_ordering(524) 00:11:51.118 fused_ordering(525) 00:11:51.118 fused_ordering(526) 00:11:51.118 fused_ordering(527) 00:11:51.118 fused_ordering(528) 00:11:51.118 fused_ordering(529) 00:11:51.118 fused_ordering(530) 00:11:51.118 fused_ordering(531) 00:11:51.118 fused_ordering(532) 00:11:51.118 fused_ordering(533) 00:11:51.118 fused_ordering(534) 00:11:51.118 fused_ordering(535) 00:11:51.118 fused_ordering(536) 00:11:51.118 fused_ordering(537) 00:11:51.118 fused_ordering(538) 00:11:51.118 fused_ordering(539) 00:11:51.118 fused_ordering(540) 00:11:51.118 fused_ordering(541) 00:11:51.118 fused_ordering(542) 00:11:51.118 fused_ordering(543) 00:11:51.118 fused_ordering(544) 00:11:51.118 fused_ordering(545) 00:11:51.118 fused_ordering(546) 00:11:51.118 fused_ordering(547) 00:11:51.118 fused_ordering(548) 00:11:51.118 fused_ordering(549) 00:11:51.118 fused_ordering(550) 00:11:51.118 fused_ordering(551) 00:11:51.118 fused_ordering(552) 00:11:51.118 fused_ordering(553) 00:11:51.118 fused_ordering(554) 00:11:51.118 fused_ordering(555) 00:11:51.118 fused_ordering(556) 00:11:51.118 fused_ordering(557) 00:11:51.118 fused_ordering(558) 00:11:51.118 fused_ordering(559) 00:11:51.118 fused_ordering(560) 00:11:51.118 fused_ordering(561) 00:11:51.118 fused_ordering(562) 00:11:51.118 fused_ordering(563) 00:11:51.118 fused_ordering(564) 00:11:51.118 fused_ordering(565) 00:11:51.118 fused_ordering(566) 00:11:51.118 fused_ordering(567) 00:11:51.118 fused_ordering(568) 00:11:51.118 fused_ordering(569) 00:11:51.118 fused_ordering(570) 00:11:51.118 fused_ordering(571) 00:11:51.118 fused_ordering(572) 00:11:51.118 fused_ordering(573) 00:11:51.118 
fused_ordering(574) 00:11:51.118 fused_ordering(575) 00:11:51.118 fused_ordering(576) 00:11:51.118 fused_ordering(577) 00:11:51.118 fused_ordering(578) 00:11:51.118 fused_ordering(579) 00:11:51.118 fused_ordering(580) 00:11:51.118 fused_ordering(581) 00:11:51.118 fused_ordering(582) 00:11:51.118 fused_ordering(583) 00:11:51.118 fused_ordering(584) 00:11:51.118 fused_ordering(585) 00:11:51.118 fused_ordering(586) 00:11:51.118 fused_ordering(587) 00:11:51.119 fused_ordering(588) 00:11:51.119 fused_ordering(589) 00:11:51.119 fused_ordering(590) 00:11:51.119 fused_ordering(591) 00:11:51.119 fused_ordering(592) 00:11:51.119 fused_ordering(593) 00:11:51.119 fused_ordering(594) 00:11:51.119 fused_ordering(595) 00:11:51.119 fused_ordering(596) 00:11:51.119 fused_ordering(597) 00:11:51.119 fused_ordering(598) 00:11:51.119 fused_ordering(599) 00:11:51.119 fused_ordering(600) 00:11:51.119 fused_ordering(601) 00:11:51.119 fused_ordering(602) 00:11:51.119 fused_ordering(603) 00:11:51.119 fused_ordering(604) 00:11:51.119 fused_ordering(605) 00:11:51.119 fused_ordering(606) 00:11:51.119 fused_ordering(607) 00:11:51.119 fused_ordering(608) 00:11:51.119 fused_ordering(609) 00:11:51.119 fused_ordering(610) 00:11:51.119 fused_ordering(611) 00:11:51.119 fused_ordering(612) 00:11:51.119 fused_ordering(613) 00:11:51.119 fused_ordering(614) 00:11:51.119 fused_ordering(615) 00:11:51.682 fused_ordering(616) 00:11:51.683 fused_ordering(617) 00:11:51.683 fused_ordering(618) 00:11:51.683 fused_ordering(619) 00:11:51.683 fused_ordering(620) 00:11:51.683 fused_ordering(621) 00:11:51.683 fused_ordering(622) 00:11:51.683 fused_ordering(623) 00:11:51.683 fused_ordering(624) 00:11:51.683 fused_ordering(625) 00:11:51.683 fused_ordering(626) 00:11:51.683 fused_ordering(627) 00:11:51.683 fused_ordering(628) 00:11:51.683 fused_ordering(629) 00:11:51.683 fused_ordering(630) 00:11:51.683 fused_ordering(631) 00:11:51.683 fused_ordering(632) 00:11:51.683 fused_ordering(633) 00:11:51.683 fused_ordering(634) 
00:11:51.683 fused_ordering(635) 00:11:51.683 fused_ordering(636) 00:11:51.683 fused_ordering(637) 00:11:51.683 fused_ordering(638) 00:11:51.683 fused_ordering(639) 00:11:51.683 fused_ordering(640) 00:11:51.683 fused_ordering(641) 00:11:51.683 fused_ordering(642) 00:11:51.683 fused_ordering(643) 00:11:51.683 fused_ordering(644) 00:11:51.683 fused_ordering(645) 00:11:51.683 fused_ordering(646) 00:11:51.683 fused_ordering(647) 00:11:51.683 fused_ordering(648) 00:11:51.683 fused_ordering(649) 00:11:51.683 fused_ordering(650) 00:11:51.683 fused_ordering(651) 00:11:51.683 fused_ordering(652) 00:11:51.683 fused_ordering(653) 00:11:51.683 fused_ordering(654) 00:11:51.683 fused_ordering(655) 00:11:51.683 fused_ordering(656) 00:11:51.683 fused_ordering(657) 00:11:51.683 fused_ordering(658) 00:11:51.683 fused_ordering(659) 00:11:51.683 fused_ordering(660) 00:11:51.683 fused_ordering(661) 00:11:51.683 fused_ordering(662) 00:11:51.683 fused_ordering(663) 00:11:51.683 fused_ordering(664) 00:11:51.683 fused_ordering(665) 00:11:51.683 fused_ordering(666) 00:11:51.683 fused_ordering(667) 00:11:51.683 fused_ordering(668) 00:11:51.683 fused_ordering(669) 00:11:51.683 fused_ordering(670) 00:11:51.683 fused_ordering(671) 00:11:51.683 fused_ordering(672) 00:11:51.683 fused_ordering(673) 00:11:51.683 fused_ordering(674) 00:11:51.683 fused_ordering(675) 00:11:51.683 fused_ordering(676) 00:11:51.683 fused_ordering(677) 00:11:51.683 fused_ordering(678) 00:11:51.683 fused_ordering(679) 00:11:51.683 fused_ordering(680) 00:11:51.683 fused_ordering(681) 00:11:51.683 fused_ordering(682) 00:11:51.683 fused_ordering(683) 00:11:51.683 fused_ordering(684) 00:11:51.683 fused_ordering(685) 00:11:51.683 fused_ordering(686) 00:11:51.683 fused_ordering(687) 00:11:51.683 fused_ordering(688) 00:11:51.683 fused_ordering(689) 00:11:51.683 fused_ordering(690) 00:11:51.683 fused_ordering(691) 00:11:51.683 fused_ordering(692) 00:11:51.683 fused_ordering(693) 00:11:51.683 fused_ordering(694) 00:11:51.683 
fused_ordering(695) 00:11:51.683 fused_ordering(696) 00:11:51.683 fused_ordering(697) 00:11:51.683 fused_ordering(698) 00:11:51.683 fused_ordering(699) 00:11:51.683 fused_ordering(700) 00:11:51.683 fused_ordering(701) 00:11:51.683 fused_ordering(702) 00:11:51.683 fused_ordering(703) 00:11:51.683 fused_ordering(704) 00:11:51.683 fused_ordering(705) 00:11:51.683 fused_ordering(706) 00:11:51.683 fused_ordering(707) 00:11:51.683 fused_ordering(708) 00:11:51.683 fused_ordering(709) 00:11:51.683 fused_ordering(710) 00:11:51.683 fused_ordering(711) 00:11:51.683 fused_ordering(712) 00:11:51.683 fused_ordering(713) 00:11:51.683 fused_ordering(714) 00:11:51.683 fused_ordering(715) 00:11:51.683 fused_ordering(716) 00:11:51.683 fused_ordering(717) 00:11:51.683 fused_ordering(718) 00:11:51.683 fused_ordering(719) 00:11:51.683 fused_ordering(720) 00:11:51.683 fused_ordering(721) 00:11:51.683 fused_ordering(722) 00:11:51.683 fused_ordering(723) 00:11:51.683 fused_ordering(724) 00:11:51.683 fused_ordering(725) 00:11:51.683 fused_ordering(726) 00:11:51.683 fused_ordering(727) 00:11:51.683 fused_ordering(728) 00:11:51.683 fused_ordering(729) 00:11:51.683 fused_ordering(730) 00:11:51.683 fused_ordering(731) 00:11:51.683 fused_ordering(732) 00:11:51.683 fused_ordering(733) 00:11:51.683 fused_ordering(734) 00:11:51.683 fused_ordering(735) 00:11:51.683 fused_ordering(736) 00:11:51.683 fused_ordering(737) 00:11:51.683 fused_ordering(738) 00:11:51.683 fused_ordering(739) 00:11:51.683 fused_ordering(740) 00:11:51.683 fused_ordering(741) 00:11:51.683 fused_ordering(742) 00:11:51.683 fused_ordering(743) 00:11:51.683 fused_ordering(744) 00:11:51.683 fused_ordering(745) 00:11:51.683 fused_ordering(746) 00:11:51.683 fused_ordering(747) 00:11:51.683 fused_ordering(748) 00:11:51.683 fused_ordering(749) 00:11:51.683 fused_ordering(750) 00:11:51.683 fused_ordering(751) 00:11:51.683 fused_ordering(752) 00:11:51.683 fused_ordering(753) 00:11:51.683 fused_ordering(754) 00:11:51.683 fused_ordering(755) 
00:11:51.683 fused_ordering(756) 00:11:51.683 fused_ordering(757) 00:11:51.683 fused_ordering(758) 00:11:51.683 fused_ordering(759) 00:11:51.683 fused_ordering(760) 00:11:51.683 fused_ordering(761) 00:11:51.683 fused_ordering(762) 00:11:51.683 fused_ordering(763) 00:11:51.683 fused_ordering(764) 00:11:51.683 fused_ordering(765) 00:11:51.683 fused_ordering(766) 00:11:51.683 fused_ordering(767) 00:11:51.683 fused_ordering(768) 00:11:51.683 fused_ordering(769) 00:11:51.683 fused_ordering(770) 00:11:51.683 fused_ordering(771) 00:11:51.683 fused_ordering(772) 00:11:51.683 fused_ordering(773) 00:11:51.683 fused_ordering(774) 00:11:51.683 fused_ordering(775) 00:11:51.683 fused_ordering(776) 00:11:51.683 fused_ordering(777) 00:11:51.683 fused_ordering(778) 00:11:51.683 fused_ordering(779) 00:11:51.683 fused_ordering(780) 00:11:51.683 fused_ordering(781) 00:11:51.683 fused_ordering(782) 00:11:51.683 fused_ordering(783) 00:11:51.683 fused_ordering(784) 00:11:51.683 fused_ordering(785) 00:11:51.683 fused_ordering(786) 00:11:51.683 fused_ordering(787) 00:11:51.683 fused_ordering(788) 00:11:51.683 fused_ordering(789) 00:11:51.683 fused_ordering(790) 00:11:51.683 fused_ordering(791) 00:11:51.683 fused_ordering(792) 00:11:51.683 fused_ordering(793) 00:11:51.683 fused_ordering(794) 00:11:51.683 fused_ordering(795) 00:11:51.683 fused_ordering(796) 00:11:51.683 fused_ordering(797) 00:11:51.683 fused_ordering(798) 00:11:51.683 fused_ordering(799) 00:11:51.683 fused_ordering(800) 00:11:51.683 fused_ordering(801) 00:11:51.683 fused_ordering(802) 00:11:51.683 fused_ordering(803) 00:11:51.683 fused_ordering(804) 00:11:51.683 fused_ordering(805) 00:11:51.683 fused_ordering(806) 00:11:51.683 fused_ordering(807) 00:11:51.683 fused_ordering(808) 00:11:51.683 fused_ordering(809) 00:11:51.683 fused_ordering(810) 00:11:51.683 fused_ordering(811) 00:11:51.683 fused_ordering(812) 00:11:51.683 fused_ordering(813) 00:11:51.683 fused_ordering(814) 00:11:51.683 fused_ordering(815) 00:11:51.683 
fused_ordering(816) 00:11:51.683 fused_ordering(817) 00:11:51.683 fused_ordering(818) 00:11:51.683 fused_ordering(819) 00:11:51.683 fused_ordering(820) 00:11:52.249 fused_ordering(821) 00:11:52.249 fused_ordering(822) 00:11:52.249 fused_ordering(823) 00:11:52.249 fused_ordering(824) 00:11:52.249 fused_ordering(825) 00:11:52.249 fused_ordering(826) 00:11:52.249 fused_ordering(827) 00:11:52.249 fused_ordering(828) 00:11:52.249 fused_ordering(829) 00:11:52.249 fused_ordering(830) 00:11:52.249 fused_ordering(831) 00:11:52.249 fused_ordering(832) 00:11:52.249 fused_ordering(833) 00:11:52.249 fused_ordering(834) 00:11:52.249 fused_ordering(835) 00:11:52.249 fused_ordering(836) 00:11:52.249 fused_ordering(837) 00:11:52.249 fused_ordering(838) 00:11:52.249 fused_ordering(839) 00:11:52.249 fused_ordering(840) 00:11:52.249 fused_ordering(841) 00:11:52.249 fused_ordering(842) 00:11:52.249 fused_ordering(843) 00:11:52.249 fused_ordering(844) 00:11:52.249 fused_ordering(845) 00:11:52.249 fused_ordering(846) 00:11:52.249 fused_ordering(847) 00:11:52.249 fused_ordering(848) 00:11:52.249 fused_ordering(849) 00:11:52.249 fused_ordering(850) 00:11:52.249 fused_ordering(851) 00:11:52.249 fused_ordering(852) 00:11:52.249 fused_ordering(853) 00:11:52.249 fused_ordering(854) 00:11:52.249 fused_ordering(855) 00:11:52.249 fused_ordering(856) 00:11:52.249 fused_ordering(857) 00:11:52.249 fused_ordering(858) 00:11:52.249 fused_ordering(859) 00:11:52.249 fused_ordering(860) 00:11:52.249 fused_ordering(861) 00:11:52.249 fused_ordering(862) 00:11:52.249 fused_ordering(863) 00:11:52.249 fused_ordering(864) 00:11:52.249 fused_ordering(865) 00:11:52.249 fused_ordering(866) 00:11:52.249 fused_ordering(867) 00:11:52.249 fused_ordering(868) 00:11:52.249 fused_ordering(869) 00:11:52.249 fused_ordering(870) 00:11:52.249 fused_ordering(871) 00:11:52.249 fused_ordering(872) 00:11:52.249 fused_ordering(873) 00:11:52.249 fused_ordering(874) 00:11:52.249 fused_ordering(875) 00:11:52.249 fused_ordering(876) 
00:11:52.249 fused_ordering(877) 00:11:52.249 fused_ordering(878) 00:11:52.249 fused_ordering(879) 00:11:52.249 fused_ordering(880) 00:11:52.249 fused_ordering(881) 00:11:52.249 fused_ordering(882) 00:11:52.249 fused_ordering(883) 00:11:52.249 fused_ordering(884) 00:11:52.249 fused_ordering(885) 00:11:52.249 fused_ordering(886) 00:11:52.249 fused_ordering(887) 00:11:52.249 fused_ordering(888) 00:11:52.249 fused_ordering(889) 00:11:52.249 fused_ordering(890) 00:11:52.249 fused_ordering(891) 00:11:52.249 fused_ordering(892) 00:11:52.249 fused_ordering(893) 00:11:52.249 fused_ordering(894) 00:11:52.249 fused_ordering(895) 00:11:52.249 fused_ordering(896) 00:11:52.249 fused_ordering(897) 00:11:52.249 fused_ordering(898) 00:11:52.249 fused_ordering(899) 00:11:52.249 fused_ordering(900) 00:11:52.249 fused_ordering(901) 00:11:52.249 fused_ordering(902) 00:11:52.249 fused_ordering(903) 00:11:52.249 fused_ordering(904) 00:11:52.249 fused_ordering(905) 00:11:52.249 fused_ordering(906) 00:11:52.249 fused_ordering(907) 00:11:52.249 fused_ordering(908) 00:11:52.249 fused_ordering(909) 00:11:52.249 fused_ordering(910) 00:11:52.249 fused_ordering(911) 00:11:52.249 fused_ordering(912) 00:11:52.249 fused_ordering(913) 00:11:52.249 fused_ordering(914) 00:11:52.249 fused_ordering(915) 00:11:52.249 fused_ordering(916) 00:11:52.249 fused_ordering(917) 00:11:52.249 fused_ordering(918) 00:11:52.249 fused_ordering(919) 00:11:52.249 fused_ordering(920) 00:11:52.249 fused_ordering(921) 00:11:52.249 fused_ordering(922) 00:11:52.249 fused_ordering(923) 00:11:52.249 fused_ordering(924) 00:11:52.250 fused_ordering(925) 00:11:52.250 fused_ordering(926) 00:11:52.250 fused_ordering(927) 00:11:52.250 fused_ordering(928) 00:11:52.250 fused_ordering(929) 00:11:52.250 fused_ordering(930) 00:11:52.250 fused_ordering(931) 00:11:52.250 fused_ordering(932) 00:11:52.250 fused_ordering(933) 00:11:52.250 fused_ordering(934) 00:11:52.250 fused_ordering(935) 00:11:52.250 fused_ordering(936) 00:11:52.250 
fused_ordering(937) 00:11:52.250 fused_ordering(938) 00:11:52.250 fused_ordering(939) 00:11:52.250 fused_ordering(940) 00:11:52.250 fused_ordering(941) 00:11:52.250 fused_ordering(942) 00:11:52.250 fused_ordering(943) 00:11:52.250 fused_ordering(944) 00:11:52.250 fused_ordering(945) 00:11:52.250 fused_ordering(946) 00:11:52.250 fused_ordering(947) 00:11:52.250 fused_ordering(948) 00:11:52.250 fused_ordering(949) 00:11:52.250 fused_ordering(950) 00:11:52.250 fused_ordering(951) 00:11:52.250 fused_ordering(952) 00:11:52.250 fused_ordering(953) 00:11:52.250 fused_ordering(954) 00:11:52.250 fused_ordering(955) 00:11:52.250 fused_ordering(956) 00:11:52.250 fused_ordering(957) 00:11:52.250 fused_ordering(958) 00:11:52.250 fused_ordering(959) 00:11:52.250 fused_ordering(960) 00:11:52.250 fused_ordering(961) 00:11:52.250 fused_ordering(962) 00:11:52.250 fused_ordering(963) 00:11:52.250 fused_ordering(964) 00:11:52.250 fused_ordering(965) 00:11:52.250 fused_ordering(966) 00:11:52.250 fused_ordering(967) 00:11:52.250 fused_ordering(968) 00:11:52.250 fused_ordering(969) 00:11:52.250 fused_ordering(970) 00:11:52.250 fused_ordering(971) 00:11:52.250 fused_ordering(972) 00:11:52.250 fused_ordering(973) 00:11:52.250 fused_ordering(974) 00:11:52.250 fused_ordering(975) 00:11:52.250 fused_ordering(976) 00:11:52.250 fused_ordering(977) 00:11:52.250 fused_ordering(978) 00:11:52.250 fused_ordering(979) 00:11:52.250 fused_ordering(980) 00:11:52.250 fused_ordering(981) 00:11:52.250 fused_ordering(982) 00:11:52.250 fused_ordering(983) 00:11:52.250 fused_ordering(984) 00:11:52.250 fused_ordering(985) 00:11:52.250 fused_ordering(986) 00:11:52.250 fused_ordering(987) 00:11:52.250 fused_ordering(988) 00:11:52.250 fused_ordering(989) 00:11:52.250 fused_ordering(990) 00:11:52.250 fused_ordering(991) 00:11:52.250 fused_ordering(992) 00:11:52.250 fused_ordering(993) 00:11:52.250 fused_ordering(994) 00:11:52.250 fused_ordering(995) 00:11:52.250 fused_ordering(996) 00:11:52.250 fused_ordering(997) 
00:11:52.250 fused_ordering(998) 00:11:52.250 fused_ordering(999) 00:11:52.250 fused_ordering(1000) 00:11:52.250 fused_ordering(1001) 00:11:52.250 fused_ordering(1002) 00:11:52.250 fused_ordering(1003) 00:11:52.250 fused_ordering(1004) 00:11:52.250 fused_ordering(1005) 00:11:52.250 fused_ordering(1006) 00:11:52.250 fused_ordering(1007) 00:11:52.250 fused_ordering(1008) 00:11:52.250 fused_ordering(1009) 00:11:52.250 fused_ordering(1010) 00:11:52.250 fused_ordering(1011) 00:11:52.250 fused_ordering(1012) 00:11:52.250 fused_ordering(1013) 00:11:52.250 fused_ordering(1014) 00:11:52.250 fused_ordering(1015) 00:11:52.250 fused_ordering(1016) 00:11:52.250 fused_ordering(1017) 00:11:52.250 fused_ordering(1018) 00:11:52.250 fused_ordering(1019) 00:11:52.250 fused_ordering(1020) 00:11:52.250 fused_ordering(1021) 00:11:52.250 fused_ordering(1022) 00:11:52.250 fused_ordering(1023) 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.250 rmmod nvme_tcp 00:11:52.250 rmmod nvme_fabrics 00:11:52.250 rmmod nvme_keyring 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 196235 ']' 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 196235 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 196235 ']' 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 196235 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 196235 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 196235' 00:11:52.250 killing process with pid 196235 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 196235 00:11:52.250 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 196235 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
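The teardown traced above (`nvmftestfini` -> `nvmfcleanup` in nvmf/common.sh) runs `set +e`, then retries `modprobe -v -r nvme-tcp` inside a `for i in {1..20}` loop before removing nvme-fabrics and restoring `set -e`. A generic sketch of that tolerate-and-retry pattern, generalized to any command; the name `retry_cmd` and its interface are illustrative assumptions, not the SPDK helper itself:

```shell
# Sketch of the set +e / retry-loop / set -e pattern from the trace above:
# run a command up to N times, tolerating failures, stopping on first success.
retry_cmd() {
    local tries=$1
    shift
    local rc=1
    set +e                      # tolerate failures while retrying
    for ((i = 0; i < tries; i++)); do
        "$@"
        rc=$?
        (( rc == 0 )) && break  # stop on first success
    done
    set -e                      # restore errexit, as the script does afterwards
    return "$rc"
}
```

In the log this wraps `modprobe -v -r nvme-tcp`, whose `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` output appears interleaved with the trace.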
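The `killprocess 196235` steps traced above probe liveness with `kill -0`, inspect the process name via `ps --no-headers -o comm=` (refusing to kill sudo), send `kill`, then `wait` on the pid. A minimal standalone sketch of that kill-and-reap flow; the name `killprocess_sketch` is an illustrative assumption, and the name/sudo check from common/autotest_common.sh is omitted:

```shell
# Minimal sketch of the killprocess flow from the trace above:
# probe with `kill -0` (sends no signal), terminate, then reap.
killprocess_sketch() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        kill "$pid" 2>/dev/null || true
        # `wait` reaps the child and returns its exit status; ignore the
        # status, and ignore the error when the pid is not our child
        wait "$pid" 2>/dev/null || true
    fi
}
```

The `wait` matters: without it a killed child lingers as a zombie and still answers `kill -0`, which is why the traced script waits on the pid after killing it.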
00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.510 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.511 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.511 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.511 04:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:54.418 00:11:54.418 real 0m7.362s 00:11:54.418 user 0m4.854s 00:11:54.418 sys 0m2.789s 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:54.418 ************************************ 00:11:54.418 END TEST nvmf_fused_ordering 00:11:54.418 ************************************ 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:54.418 04:02:22 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:54.418 ************************************ 00:11:54.418 START TEST nvmf_ns_masking 00:11:54.418 ************************************ 00:11:54.418 04:02:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:54.678 * Looking for test storage... 00:11:54.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:11:54.678 04:02:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:54.678 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:54.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.678 --rc genhtml_branch_coverage=1 00:11:54.678 --rc genhtml_function_coverage=1 00:11:54.678 --rc genhtml_legend=1 00:11:54.678 --rc geninfo_all_blocks=1 00:11:54.678 --rc geninfo_unexecuted_blocks=1 00:11:54.678 00:11:54.678 ' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.679 --rc genhtml_branch_coverage=1 00:11:54.679 --rc genhtml_function_coverage=1 00:11:54.679 --rc genhtml_legend=1 00:11:54.679 --rc geninfo_all_blocks=1 00:11:54.679 --rc geninfo_unexecuted_blocks=1 00:11:54.679 00:11:54.679 ' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.679 --rc genhtml_branch_coverage=1 00:11:54.679 --rc genhtml_function_coverage=1 00:11:54.679 --rc genhtml_legend=1 00:11:54.679 --rc geninfo_all_blocks=1 00:11:54.679 --rc geninfo_unexecuted_blocks=1 00:11:54.679 00:11:54.679 ' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:54.679 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:54.679 --rc genhtml_branch_coverage=1 00:11:54.679 --rc 
genhtml_function_coverage=1 00:11:54.679 --rc genhtml_legend=1 00:11:54.679 --rc geninfo_all_blocks=1 00:11:54.679 --rc geninfo_unexecuted_blocks=1 00:11:54.679 00:11:54.679 ' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:54.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=39eb07b0-72b5-4607-bbc7-0ec9610a4911 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:11:54.679 04:02:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:57.218 04:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.218 04:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:57.218 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:57.218 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:11:57.218 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:57.218 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.218 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:57.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:57.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:11:57.219 00:11:57.219 --- 10.0.0.2 ping statistics --- 00:11:57.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.219 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:57.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:57.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:11:57.219 00:11:57.219 --- 10.0.0.1 ping statistics --- 00:11:57.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:57.219 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=198577 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 198577 
00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 198577 ']' 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:57.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:57.219 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.219 [2024-12-09 04:02:25.597903] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:11:57.219 [2024-12-09 04:02:25.597993] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.219 [2024-12-09 04:02:25.671489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:57.219 [2024-12-09 04:02:25.729424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.219 [2024-12-09 04:02:25.729496] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:57.219 [2024-12-09 04:02:25.729526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:57.219 [2024-12-09 04:02:25.729537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:57.219 [2024-12-09 04:02:25.729547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.219 [2024-12-09 04:02:25.730216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:57.477 04:02:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:57.735 [2024-12-09 04:02:26.128138] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:57.735 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:57.735 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:57.735 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:11:57.993 Malloc1 00:11:57.993 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:58.251 Malloc2 00:11:58.251 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.508 04:02:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:58.767 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.024 [2024-12-09 04:02:27.515788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.024 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:59.024 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 -a 10.0.0.2 -s 4420 -i 4 00:11:59.280 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.280 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:11:59.280 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.280 04:02:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:59.280 04:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:01.178 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.435 [ 0]:0x1 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.435 
04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.435 04:02:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.693 [ 0]:0x1 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.693 [ 1]:0x2 00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:12:01.693 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.952 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:01.952 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.952 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:01.952 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.952 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:02.210 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:02.468 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:02.468 04:02:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 -a 10.0.0.2 -s 4420 -i 4 00:12:02.732 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:02.732 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:02.732 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:02.732 04:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:12:02.732 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:12:02.732 04:02:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.633 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.892 [ 0]:0x2 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.892 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.151 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:05.151 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.151 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.151 [ 0]:0x1 00:12:05.151 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.151 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.409 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.410 [ 1]:0x2 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.410 04:02:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:05.668 [ 0]:0x2 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.668 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.668 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:05.926 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:05.926 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 9d1a632a-deb6-4410-b8f1-1f27a6f4c3d9 -a 10.0.0.2 -s 4420 -i 4 00:12:06.185 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:06.185 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:06.185 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.185 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:06.185 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:06.185 04:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:08.090 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:08.347 [ 0]:0x1 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:08.347 04:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=12a5eea3d770425b91715f1a6e30d712 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 12a5eea3d770425b91715f1a6e30d712 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:08.347 [ 1]:0x2 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:08.347 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.604 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:08.604 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.604 04:02:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:08.862 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:08.862 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:08.862 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:08.862 
04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:08.863 [ 0]:0x2 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.863 04:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:08.863 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:09.120 [2024-12-09 04:02:37.597882] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:09.120 request: 00:12:09.120 { 00:12:09.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.120 "nsid": 2, 00:12:09.120 "host": "nqn.2016-06.io.spdk:host1", 00:12:09.120 "method": "nvmf_ns_remove_host", 00:12:09.120 "req_id": 1 00:12:09.120 } 00:12:09.120 Got JSON-RPC error response 00:12:09.120 response: 00:12:09.120 { 00:12:09.120 "code": -32602, 00:12:09.120 "message": "Invalid parameters" 00:12:09.120 } 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:09.120 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:09.121 04:02:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:09.121 [ 0]:0x2 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:09.121 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=89090dffd8074f1cb34471b04ecc30a7 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 89090dffd8074f1cb34471b04ecc30a7 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.378 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=200183 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 200183 /var/tmp/host.sock 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 200183 ']' 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:09.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.378 04:02:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:09.378 [2024-12-09 04:02:37.812829] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
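At this point the harness starts a second `spdk_tgt` as the host-side RPC server and blocks in `waitforlisten` until `/var/tmp/host.sock` accepts connections. A minimal standalone sketch of that wait loop (a hypothetical helper, not SPDK's actual `waitforlisten`, which additionally checks that the target PID is still alive between polls):

```python
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 10.0,
                         interval: float = 0.1) -> bool:
    """Poll until a connect() to the UNIX-domain socket at `path` succeeds.

    Returns True once the RPC server is accepting connections, False if the
    deadline passes first. A failed connect() (socket absent or no listener
    yet) just sleeps and retries, as the shell loop in the trace does."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(path)
                return True
        except OSError:
            time.sleep(interval)
    return False
```

Only after this wait do the `-s /var/tmp/host.sock` RPCs later in the trace (e.g. `bdev_nvme_attach_controller`) become safe to issue.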
00:12:09.378 [2024-12-09 04:02:37.812908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid200183 ] 00:12:09.378 [2024-12-09 04:02:37.878354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.378 [2024-12-09 04:02:37.935405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.637 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.637 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:09.637 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.203 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:10.203 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c 00:12:10.203 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:10.203 04:02:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C -i 00:12:10.768 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 39eb07b0-72b5-4607-bbc7-0ec9610a4911 00:12:10.768 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:10.768 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 39EB07B072B54607BBC70EC9610A4911 -i 00:12:10.768 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:11.334 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:11.334 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:11.334 04:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:11.900 nvme0n1 00:12:11.900 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:11.901 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:12.159 nvme1n2 00:12:12.159 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:12.159 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:12.159 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:12.159 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:12.159 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:12.417 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:12.675 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:12.675 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:12.675 04:02:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:12.933 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c == \9\7\3\6\7\a\b\f\-\d\3\a\e\-\4\d\4\b\-\a\0\d\c\-\e\d\6\f\6\b\5\7\c\9\7\c ]] 00:12:12.933 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:12.933 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:12.933 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:13.192 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 39eb07b0-72b5-4607-bbc7-0ec9610a4911 == \3\9\e\b\0\7\b\0\-\7\2\b\5\-\4\6\0\7\-\b\b\c\7\-\0\e\c\9\6\1\0\a\4\9\1\1 ]] 00:12:13.192 04:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.450 04:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:13.708 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C 00:12:13.966 [2024-12-09 04:02:42.355791] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:12:13.966 [2024-12-09 04:02:42.355833] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:13.966 [2024-12-09 04:02:42.355863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:13.966 request: 00:12:13.966 { 00:12:13.966 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:13.966 "namespace": { 00:12:13.966 "bdev_name": "invalid", 00:12:13.966 "nsid": 1, 00:12:13.966 "nguid": "97367ABFD3AE4D4BA0DCED6F6B57C97C", 00:12:13.966 "no_auto_visible": false, 00:12:13.966 "hide_metadata": false 00:12:13.966 }, 00:12:13.966 "method": "nvmf_subsystem_add_ns", 00:12:13.966 "req_id": 1 00:12:13.966 } 00:12:13.966 Got JSON-RPC error response 00:12:13.966 response: 00:12:13.966 { 00:12:13.966 "code": -32602, 00:12:13.966 "message": "Invalid parameters" 00:12:13.966 } 00:12:13.966 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:13.966 04:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.966 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.966 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.966 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 97367abf-d3ae-4d4b-a0dc-ed6f6b57c97c 00:12:13.966 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:12:13.966 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 97367ABFD3AE4D4BA0DCED6F6B57C97C -i 00:12:14.224 04:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:16.127 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:16.127 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:16.127 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:16.385 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:16.385 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 200183 00:12:16.385 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 200183 ']' 00:12:16.385 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 200183 00:12:16.385 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:16.385 04:02:44 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.385 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 200183 00:12:16.646 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:16.646 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:16.646 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 200183' 00:12:16.646 killing process with pid 200183 00:12:16.646 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 200183 00:12:16.646 04:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 200183 00:12:16.905 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:17.162 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:12:17.162 rmmod nvme_tcp 00:12:17.162 rmmod nvme_fabrics 00:12:17.162 rmmod nvme_keyring 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 198577 ']' 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 198577 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 198577 ']' 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 198577 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 198577 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 198577' 00:12:17.420 killing process with pid 198577 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 198577 00:12:17.420 04:02:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 198577 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.680 04:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.586 00:12:19.586 real 0m25.120s 00:12:19.586 user 0m36.274s 00:12:19.586 sys 0m4.824s 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:19.586 ************************************ 00:12:19.586 END TEST nvmf_ns_masking 00:12:19.586 ************************************ 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:19.586 
04:02:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.586 04:02:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.845 ************************************ 00:12:19.845 START TEST nvmf_nvme_cli 00:12:19.845 ************************************ 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:19.845 * Looking for test storage... 00:12:19.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.845 
04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.845 --rc genhtml_branch_coverage=1 00:12:19.845 --rc genhtml_function_coverage=1 00:12:19.845 --rc genhtml_legend=1 00:12:19.845 --rc geninfo_all_blocks=1 00:12:19.845 --rc geninfo_unexecuted_blocks=1 00:12:19.845 
00:12:19.845 ' 00:12:19.845 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.845 --rc genhtml_branch_coverage=1 00:12:19.845 --rc genhtml_function_coverage=1 00:12:19.845 --rc genhtml_legend=1 00:12:19.845 --rc geninfo_all_blocks=1 00:12:19.846 --rc geninfo_unexecuted_blocks=1 00:12:19.846 00:12:19.846 ' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.846 --rc genhtml_branch_coverage=1 00:12:19.846 --rc genhtml_function_coverage=1 00:12:19.846 --rc genhtml_legend=1 00:12:19.846 --rc geninfo_all_blocks=1 00:12:19.846 --rc geninfo_unexecuted_blocks=1 00:12:19.846 00:12:19.846 ' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.846 --rc genhtml_branch_coverage=1 00:12:19.846 --rc genhtml_function_coverage=1 00:12:19.846 --rc genhtml_legend=1 00:12:19.846 --rc geninfo_all_blocks=1 00:12:19.846 --rc geninfo_unexecuted_blocks=1 00:12:19.846 00:12:19.846 ' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.846 04:02:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.846 04:02:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:12:22.382 04:02:50 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:22.382 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:22.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:22.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:22.383 04:02:50 
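The trace above shows `nvmf/common.sh` sorting detected NICs into driver-family arrays (`e810`, `x722`, `mlx`) by looking up `vendor:device` IDs in a PCI bus cache and then picking one family as `pci_devs`. A minimal sketch of that grouping pattern — the cache contents and key format here are illustrative stand-ins, not the suite's actual `pci_bus_cache`:

```shell
#!/usr/bin/env bash
# Sketch: classify PCI addresses into NIC-family arrays via a
# vendor:device lookup table, mirroring the e810/x722/mlx grouping
# in the trace. Cache contents are hypothetical sample data.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:0a:00.0 0000:0a:00.1"   # Intel E810 (ice)
  ["0x15b3:0x1017"]="0000:3b:00.0"                 # Mellanox CX-5 (mlx5)
)
declare -a e810 mlx pci_devs
# Unquoted expansion splits the cached string into one address per element.
e810+=(${pci_bus_cache["0x8086:0x159b"]})
mlx+=(${pci_bus_cache["0x15b3:0x1017"]})
# TEST_TRANSPORT=tcp with an e810 NIC selects the e810 family, as above.
pci_devs=("${e810[@]}")
echo "e810 devices: ${#e810[@]}"   # -> e810 devices: 2
```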
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:22.383 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:22.383 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:22.383 04:02:50 
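Lines @410–429 resolve each PCI address to its kernel network interface by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix with `##*/`. A sketch of that lookup, using a temp directory in place of `/sys` so it runs unprivileged (the temp tree is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Sketch: map a PCI address to its net interface the way the trace
# does at nvmf/common.sh@411 and @427. A mktemp tree stands in for
# the real /sys/bus/pci/devices hierarchy.
sysfs=$(mktemp -d)
pci="0000:0a:00.0"
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)          # glob: one entry per interface
pci_net_devs=("${pci_net_devs[@]##*/}")     # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# -> Found net devices under 0000:0a:00.0: cvl_0_0
rm -r "$sysfs"
```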
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:22.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:22.383 00:12:22.383 --- 10.0.0.2 ping statistics --- 00:12:22.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.383 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:22.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:12:22.383 00:12:22.383 --- 10.0.0.1 ping statistics --- 00:12:22.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.383 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:22.383 04:02:50 
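The `ipts` call at @287 expands (trace line @790) into a real `iptables` invocation that tags the rule with an `SPDK_NVMF:` comment so teardown can find and remove it later. A sketch of that wrapper, with `iptables` stubbed to echo its arguments so the sketch runs without root — the stub is an assumption, not part of the suite:

```shell
#!/usr/bin/env bash
# Sketch: tag every firewall rule with an "SPDK_NVMF:" comment,
# matching the expansion seen at nvmf/common.sh@790. The iptables
# function stub shadows the real binary and just echoes.
iptables() { echo "iptables $*"; }
ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Because the comment embeds the full original argument list, a later `grep -v SPDK_NVMF` over `iptables-save` output is enough to strip every rule the test added.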
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=203110 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 203110 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 203110 ']' 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:22.383 04:02:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.383 [2024-12-09 04:02:50.764901] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:12:22.384 [2024-12-09 04:02:50.764977] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.384 [2024-12-09 04:02:50.835050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.384 [2024-12-09 04:02:50.890089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.384 [2024-12-09 04:02:50.890148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.384 [2024-12-09 04:02:50.890176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:22.384 [2024-12-09 04:02:50.890186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:22.384 [2024-12-09 04:02:50.890195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:22.384 [2024-12-09 04:02:50.891863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.384 [2024-12-09 04:02:50.891945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.384 [2024-12-09 04:02:50.892053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.384 [2024-12-09 04:02:50.892056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 [2024-12-09 04:02:51.032969] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 Malloc0 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.642 Malloc1 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:22.642 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.643 [2024-12-09 04:02:51.132716] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.643 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:22.901 00:12:22.901 Discovery Log Number of Records 2, Generation counter 2 00:12:22.901 =====Discovery Log Entry 0====== 00:12:22.901 trtype: tcp 00:12:22.901 adrfam: ipv4 00:12:22.901 subtype: current discovery subsystem 00:12:22.901 treq: not required 00:12:22.901 portid: 0 00:12:22.901 trsvcid: 4420 
00:12:22.901 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:22.901 traddr: 10.0.0.2 00:12:22.901 eflags: explicit discovery connections, duplicate discovery information 00:12:22.901 sectype: none 00:12:22.901 =====Discovery Log Entry 1====== 00:12:22.901 trtype: tcp 00:12:22.901 adrfam: ipv4 00:12:22.901 subtype: nvme subsystem 00:12:22.901 treq: not required 00:12:22.901 portid: 0 00:12:22.901 trsvcid: 4420 00:12:22.901 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:22.901 traddr: 10.0.0.2 00:12:22.901 eflags: none 00:12:22.901 sectype: none 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:22.901 04:02:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.469 04:02:52 
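The `nvme discover` output above lists two records: the discovery subsystem itself and `nqn.2016-06.io.spdk:cnode1`. A sketch of pulling the subsystem NQNs out of that report with `awk`, using a trimmed copy of the output above as sample input:

```shell
#!/usr/bin/env bash
# Sketch: extract subnqn fields from "nvme discover" output.
# The here-string reproduces (trimmed) the two records in the trace.
discovery_log='=====Discovery Log Entry 0======
trtype:  tcp
subtype: current discovery subsystem
subnqn:  nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
trtype:  tcp
subtype: nvme subsystem
subnqn:  nqn.2016-06.io.spdk:cnode1'

subnqns=$(awk '$1 == "subnqn:" {print $2}' <<<"$discovery_log")
echo "$subnqns"
```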
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:23.469 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:12:23.469 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.469 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:23.469 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:23.469 04:02:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:25.999 
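`waitforserial SPDKISFASTANDAWESOME 2` above polls `lsblk -l -o NAME,SERIAL` until both namespaces of the connected controller appear (trace lines @1210–1212). A sketch of that polling loop, with `lsblk` stubbed to the two-namespace layout from the trace so the sketch runs anywhere — the stub is an assumption, not the suite's helper:

```shell
#!/usr/bin/env bash
# Sketch: the waitforserial pattern — poll lsblk until the expected
# number of block devices report the target's serial number.
# lsblk is stubbed with the nvme0n1/nvme0n2 layout seen in the trace.
lsblk() {
    printf 'NAME    SERIAL\nnvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\n'
}

waitforserial() {
    local serial=$1 expected=${2:-1} i=0 found=0
    while (( i++ <= 15 )); do                    # bounded retries, as at @1210
        found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( found == expected )) && return 0
        sleep 1
    done
    return 1
}

waitforserial SPDKISFASTANDAWESOME 2 && echo "2 namespaces visible"
```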
04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:12:25.999 /dev/nvme0n2 ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.999 rmmod nvme_tcp 00:12:25.999 rmmod nvme_fabrics 00:12:25.999 rmmod nvme_keyring 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 203110 ']' 
00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 203110 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 203110 ']' 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 203110 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203110 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203110' 00:12:25.999 killing process with pid 203110 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 203110 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 203110 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
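Teardown (the `iptr` step at @297/@791) removes the tagged firewall rules by round-tripping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. A sketch of just the filtering step, run against a hypothetical saved ruleset instead of the live firewall:

```shell
#!/usr/bin/env bash
# Sketch: the iptr cleanup filter — drop every rule carrying the
# SPDK_NVMF comment from an iptables-save dump before restoring it.
# A here-string stands in for the live ruleset (sample data).
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF: test rule"
-A INPUT -j DROP'

filtered=$(grep -v SPDK_NVMF <<<"$ruleset")
echo "$filtered"
# In the suite: iptables-save | grep -v SPDK_NVMF | iptables-restore
```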
00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.999 04:02:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.543 00:12:28.543 real 0m8.447s 00:12:28.543 user 0m15.206s 00:12:28.543 sys 0m2.427s 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:28.543 ************************************ 00:12:28.543 END TEST nvmf_nvme_cli 00:12:28.543 ************************************ 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.543 ************************************ 00:12:28.543 START TEST 
nvmf_vfio_user 00:12:28.543 ************************************ 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:28.543 * Looking for test storage... 00:12:28.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.543 04:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.543 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:12:28.544 04:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:28.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.544 --rc genhtml_branch_coverage=1 00:12:28.544 --rc genhtml_function_coverage=1 00:12:28.544 --rc genhtml_legend=1 00:12:28.544 --rc geninfo_all_blocks=1 00:12:28.544 --rc geninfo_unexecuted_blocks=1 00:12:28.544 00:12:28.544 ' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:28.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.544 --rc genhtml_branch_coverage=1 00:12:28.544 --rc genhtml_function_coverage=1 00:12:28.544 --rc genhtml_legend=1 00:12:28.544 --rc geninfo_all_blocks=1 00:12:28.544 --rc geninfo_unexecuted_blocks=1 00:12:28.544 00:12:28.544 ' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:28.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.544 --rc genhtml_branch_coverage=1 00:12:28.544 --rc genhtml_function_coverage=1 00:12:28.544 --rc genhtml_legend=1 00:12:28.544 --rc geninfo_all_blocks=1 00:12:28.544 --rc geninfo_unexecuted_blocks=1 00:12:28.544 00:12:28.544 ' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:28.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.544 --rc genhtml_branch_coverage=1 00:12:28.544 --rc genhtml_function_coverage=1 00:12:28.544 --rc genhtml_legend=1 00:12:28.544 --rc geninfo_all_blocks=1 00:12:28.544 --rc geninfo_unexecuted_blocks=1 00:12:28.544 00:12:28.544 ' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.544 
04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.544 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:28.544 04:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=203925 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 203925' 00:12:28.545 Process pid: 203925 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 203925 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
203925 ']' 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.545 04:02:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:28.545 [2024-12-09 04:02:56.872529] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:12:28.545 [2024-12-09 04:02:56.872638] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.545 [2024-12-09 04:02:56.940581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.545 [2024-12-09 04:02:57.002640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.545 [2024-12-09 04:02:57.002695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.545 [2024-12-09 04:02:57.002709] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.545 [2024-12-09 04:02:57.002720] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.545 [2024-12-09 04:02:57.002730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:28.545 [2024-12-09 04:02:57.004343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.545 [2024-12-09 04:02:57.004403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.545 [2024-12-09 04:02:57.004463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.545 [2024-12-09 04:02:57.004467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.803 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.803 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:12:28.803 04:02:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:29.733 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:29.989 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:29.989 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:29.989 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:29.989 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:29.989 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:30.247 Malloc1 00:12:30.247 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:30.504 04:02:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:30.761 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:31.017 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:31.017 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:31.017 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:31.275 Malloc2 00:12:31.275 04:02:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:31.534 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:32.100 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:32.361 [2024-12-09 04:03:00.680428] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:12:32.361 [2024-12-09 04:03:00.680471] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid204512 ] 00:12:32.361 [2024-12-09 04:03:00.730762] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:32.361 [2024-12-09 04:03:00.739761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.361 [2024-12-09 04:03:00.739794] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f39a97ae000 00:12:32.361 [2024-12-09 04:03:00.740751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.741750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.742756] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.743762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.744762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.745765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.746771] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.747777] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:32.361 [2024-12-09 04:03:00.748781] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:32.361 [2024-12-09 04:03:00.748802] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f39a97a3000 00:12:32.361 [2024-12-09 04:03:00.749921] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.361 [2024-12-09 04:03:00.764962] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:32.361 [2024-12-09 04:03:00.765006] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:12:32.361 [2024-12-09 04:03:00.769911] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.361 [2024-12-09 04:03:00.769965] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:32.361 [2024-12-09 04:03:00.770055] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:12:32.361 [2024-12-09 04:03:00.770081] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:12:32.361 [2024-12-09 04:03:00.770093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:12:32.361 [2024-12-09 04:03:00.770906] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:32.361 [2024-12-09 04:03:00.770930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:12:32.361 [2024-12-09 04:03:00.770945] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:12:32.361 [2024-12-09 04:03:00.771907] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:32.361 [2024-12-09 04:03:00.771925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:12:32.361 [2024-12-09 04:03:00.771938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:32.361 [2024-12-09 04:03:00.772914] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:32.361 [2024-12-09 04:03:00.772932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:32.361 [2024-12-09 04:03:00.773918] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:32.361 [2024-12-09 04:03:00.773937] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:32.361 [2024-12-09 04:03:00.773946] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:32.361 [2024-12-09 04:03:00.773957] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:32.361 [2024-12-09 04:03:00.774067] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:12:32.361 [2024-12-09 04:03:00.774075] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:32.361 [2024-12-09 04:03:00.774083] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:32.361 [2024-12-09 04:03:00.775297] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:32.361 [2024-12-09 04:03:00.775926] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:32.361 [2024-12-09 04:03:00.776934] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.361 [2024-12-09 04:03:00.777929] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:32.361 [2024-12-09 04:03:00.778040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:32.361 [2024-12-09 04:03:00.778946] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:32.362 [2024-12-09 04:03:00.778964] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:32.362 [2024-12-09 04:03:00.778973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.778996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:12:32.362 [2024-12-09 04:03:00.779010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779039] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.362 [2024-12-09 04:03:00.779049] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.362 [2024-12-09 04:03:00.779055] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.362 [2024-12-09 04:03:00.779071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.779155] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:12:32.362 [2024-12-09 04:03:00.779165] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:12:32.362 [2024-12-09 04:03:00.779172] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:12:32.362 [2024-12-09 04:03:00.779183] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:32.362 [2024-12-09 04:03:00.779191] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:12:32.362 [2024-12-09 04:03:00.779199] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:12:32.362 [2024-12-09 04:03:00.779206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.779291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.362 [2024-12-09 04:03:00.779305] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.362 [2024-12-09 04:03:00.779317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.362 [2024-12-09 04:03:00.779329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:32.362 [2024-12-09 04:03:00.779338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779370] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.779393] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:12:32.362 [2024-12-09 04:03:00.779401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779423] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.779524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779541] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779570] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:32.362 [2024-12-09 04:03:00.779580] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:32.362 [2024-12-09 04:03:00.779591] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.362 [2024-12-09 04:03:00.779601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.779655] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:12:32.362 [2024-12-09 04:03:00.779676] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779691] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779703] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.362 [2024-12-09 04:03:00.779711] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.362 [2024-12-09 04:03:00.779717] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.362 [2024-12-09 04:03:00.779726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.779780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779795] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779807] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:32.362 [2024-12-09 04:03:00.779815] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.362 [2024-12-09 04:03:00.779821] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.362 [2024-12-09 04:03:00.779830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:12:32.362 [2024-12-09 04:03:00.779859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779884] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779907] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779923] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:32.362 [2024-12-09 04:03:00.779934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:12:32.362 [2024-12-09 04:03:00.779942] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:12:32.362 [2024-12-09 04:03:00.779967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.779985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.780004] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.780016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.780031] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.780042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.780057] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:32.362 [2024-12-09 04:03:00.780068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:32.362 [2024-12-09 04:03:00.780090] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:32.362 [2024-12-09 04:03:00.780100] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:32.362 [2024-12-09 04:03:00.780106] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:32.362 [2024-12-09 04:03:00.780112] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:32.362 [2024-12-09 04:03:00.780118] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:32.362 [2024-12-09 04:03:00.780127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:32.362 [2024-12-09 04:03:00.780138] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:32.362 [2024-12-09 04:03:00.780146] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:32.362 [2024-12-09 04:03:00.780152] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.363 [2024-12-09 04:03:00.780161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:32.363 [2024-12-09 04:03:00.780171] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:32.363 [2024-12-09 04:03:00.780179] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:32.363 [2024-12-09 04:03:00.780185] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.363 [2024-12-09 04:03:00.780193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:32.363 [2024-12-09 04:03:00.780205] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:32.363 [2024-12-09 04:03:00.780212] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:32.363 [2024-12-09 04:03:00.780218] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:32.363 [2024-12-09 04:03:00.780226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:32.363 [2024-12-09 04:03:00.780237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:32.363 [2024-12-09 
04:03:00.780285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:32.363 [2024-12-09 04:03:00.780307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:32.363 [2024-12-09 04:03:00.780320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:32.363 ===================================================== 00:12:32.363 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:32.363 ===================================================== 00:12:32.363 Controller Capabilities/Features 00:12:32.363 ================================ 00:12:32.363 Vendor ID: 4e58 00:12:32.363 Subsystem Vendor ID: 4e58 00:12:32.363 Serial Number: SPDK1 00:12:32.363 Model Number: SPDK bdev Controller 00:12:32.363 Firmware Version: 25.01 00:12:32.363 Recommended Arb Burst: 6 00:12:32.363 IEEE OUI Identifier: 8d 6b 50 00:12:32.363 Multi-path I/O 00:12:32.363 May have multiple subsystem ports: Yes 00:12:32.363 May have multiple controllers: Yes 00:12:32.363 Associated with SR-IOV VF: No 00:12:32.363 Max Data Transfer Size: 131072 00:12:32.363 Max Number of Namespaces: 32 00:12:32.363 Max Number of I/O Queues: 127 00:12:32.363 NVMe Specification Version (VS): 1.3 00:12:32.363 NVMe Specification Version (Identify): 1.3 00:12:32.363 Maximum Queue Entries: 256 00:12:32.363 Contiguous Queues Required: Yes 00:12:32.363 Arbitration Mechanisms Supported 00:12:32.363 Weighted Round Robin: Not Supported 00:12:32.363 Vendor Specific: Not Supported 00:12:32.363 Reset Timeout: 15000 ms 00:12:32.363 Doorbell Stride: 4 bytes 00:12:32.363 NVM Subsystem Reset: Not Supported 00:12:32.363 Command Sets Supported 00:12:32.363 NVM Command Set: Supported 00:12:32.363 Boot Partition: Not Supported 00:12:32.363 Memory Page Size Minimum: 4096 bytes 00:12:32.363 
Memory Page Size Maximum: 4096 bytes 00:12:32.363 Persistent Memory Region: Not Supported 00:12:32.363 Optional Asynchronous Events Supported 00:12:32.363 Namespace Attribute Notices: Supported 00:12:32.363 Firmware Activation Notices: Not Supported 00:12:32.363 ANA Change Notices: Not Supported 00:12:32.363 PLE Aggregate Log Change Notices: Not Supported 00:12:32.363 LBA Status Info Alert Notices: Not Supported 00:12:32.363 EGE Aggregate Log Change Notices: Not Supported 00:12:32.363 Normal NVM Subsystem Shutdown event: Not Supported 00:12:32.363 Zone Descriptor Change Notices: Not Supported 00:12:32.363 Discovery Log Change Notices: Not Supported 00:12:32.363 Controller Attributes 00:12:32.363 128-bit Host Identifier: Supported 00:12:32.363 Non-Operational Permissive Mode: Not Supported 00:12:32.363 NVM Sets: Not Supported 00:12:32.363 Read Recovery Levels: Not Supported 00:12:32.363 Endurance Groups: Not Supported 00:12:32.363 Predictable Latency Mode: Not Supported 00:12:32.363 Traffic Based Keep ALive: Not Supported 00:12:32.363 Namespace Granularity: Not Supported 00:12:32.363 SQ Associations: Not Supported 00:12:32.363 UUID List: Not Supported 00:12:32.363 Multi-Domain Subsystem: Not Supported 00:12:32.363 Fixed Capacity Management: Not Supported 00:12:32.363 Variable Capacity Management: Not Supported 00:12:32.363 Delete Endurance Group: Not Supported 00:12:32.363 Delete NVM Set: Not Supported 00:12:32.363 Extended LBA Formats Supported: Not Supported 00:12:32.363 Flexible Data Placement Supported: Not Supported 00:12:32.363 00:12:32.363 Controller Memory Buffer Support 00:12:32.363 ================================ 00:12:32.363 Supported: No 00:12:32.363 00:12:32.363 Persistent Memory Region Support 00:12:32.363 ================================ 00:12:32.363 Supported: No 00:12:32.363 00:12:32.363 Admin Command Set Attributes 00:12:32.363 ============================ 00:12:32.363 Security Send/Receive: Not Supported 00:12:32.363 Format NVM: Not Supported 
00:12:32.363 Firmware Activate/Download: Not Supported 00:12:32.363 Namespace Management: Not Supported 00:12:32.363 Device Self-Test: Not Supported 00:12:32.363 Directives: Not Supported 00:12:32.363 NVMe-MI: Not Supported 00:12:32.363 Virtualization Management: Not Supported 00:12:32.363 Doorbell Buffer Config: Not Supported 00:12:32.363 Get LBA Status Capability: Not Supported 00:12:32.363 Command & Feature Lockdown Capability: Not Supported 00:12:32.363 Abort Command Limit: 4 00:12:32.363 Async Event Request Limit: 4 00:12:32.363 Number of Firmware Slots: N/A 00:12:32.363 Firmware Slot 1 Read-Only: N/A 00:12:32.363 Firmware Activation Without Reset: N/A 00:12:32.363 Multiple Update Detection Support: N/A 00:12:32.363 Firmware Update Granularity: No Information Provided 00:12:32.363 Per-Namespace SMART Log: No 00:12:32.363 Asymmetric Namespace Access Log Page: Not Supported 00:12:32.363 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:32.363 Command Effects Log Page: Supported 00:12:32.363 Get Log Page Extended Data: Supported 00:12:32.363 Telemetry Log Pages: Not Supported 00:12:32.363 Persistent Event Log Pages: Not Supported 00:12:32.363 Supported Log Pages Log Page: May Support 00:12:32.363 Commands Supported & Effects Log Page: Not Supported 00:12:32.363 Feature Identifiers & Effects Log Page:May Support 00:12:32.363 NVMe-MI Commands & Effects Log Page: May Support 00:12:32.363 Data Area 4 for Telemetry Log: Not Supported 00:12:32.363 Error Log Page Entries Supported: 128 00:12:32.363 Keep Alive: Supported 00:12:32.363 Keep Alive Granularity: 10000 ms 00:12:32.363 00:12:32.363 NVM Command Set Attributes 00:12:32.363 ========================== 00:12:32.363 Submission Queue Entry Size 00:12:32.363 Max: 64 00:12:32.363 Min: 64 00:12:32.363 Completion Queue Entry Size 00:12:32.363 Max: 16 00:12:32.363 Min: 16 00:12:32.363 Number of Namespaces: 32 00:12:32.363 Compare Command: Supported 00:12:32.363 Write Uncorrectable Command: Not Supported 00:12:32.363 Dataset 
Management Command: Supported 00:12:32.363 Write Zeroes Command: Supported 00:12:32.363 Set Features Save Field: Not Supported 00:12:32.363 Reservations: Not Supported 00:12:32.363 Timestamp: Not Supported 00:12:32.363 Copy: Supported 00:12:32.363 Volatile Write Cache: Present 00:12:32.363 Atomic Write Unit (Normal): 1 00:12:32.363 Atomic Write Unit (PFail): 1 00:12:32.363 Atomic Compare & Write Unit: 1 00:12:32.363 Fused Compare & Write: Supported 00:12:32.363 Scatter-Gather List 00:12:32.363 SGL Command Set: Supported (Dword aligned) 00:12:32.363 SGL Keyed: Not Supported 00:12:32.363 SGL Bit Bucket Descriptor: Not Supported 00:12:32.363 SGL Metadata Pointer: Not Supported 00:12:32.363 Oversized SGL: Not Supported 00:12:32.363 SGL Metadata Address: Not Supported 00:12:32.363 SGL Offset: Not Supported 00:12:32.363 Transport SGL Data Block: Not Supported 00:12:32.363 Replay Protected Memory Block: Not Supported 00:12:32.363 00:12:32.363 Firmware Slot Information 00:12:32.363 ========================= 00:12:32.363 Active slot: 1 00:12:32.363 Slot 1 Firmware Revision: 25.01 00:12:32.363 00:12:32.363 00:12:32.363 Commands Supported and Effects 00:12:32.363 ============================== 00:12:32.363 Admin Commands 00:12:32.363 -------------- 00:12:32.363 Get Log Page (02h): Supported 00:12:32.363 Identify (06h): Supported 00:12:32.363 Abort (08h): Supported 00:12:32.363 Set Features (09h): Supported 00:12:32.363 Get Features (0Ah): Supported 00:12:32.363 Asynchronous Event Request (0Ch): Supported 00:12:32.363 Keep Alive (18h): Supported 00:12:32.363 I/O Commands 00:12:32.363 ------------ 00:12:32.363 Flush (00h): Supported LBA-Change 00:12:32.363 Write (01h): Supported LBA-Change 00:12:32.363 Read (02h): Supported 00:12:32.363 Compare (05h): Supported 00:12:32.363 Write Zeroes (08h): Supported LBA-Change 00:12:32.363 Dataset Management (09h): Supported LBA-Change 00:12:32.364 Copy (19h): Supported LBA-Change 00:12:32.364 00:12:32.364 Error Log 00:12:32.364 ========= 
00:12:32.364 00:12:32.364 Arbitration 00:12:32.364 =========== 00:12:32.364 Arbitration Burst: 1 00:12:32.364 00:12:32.364 Power Management 00:12:32.364 ================ 00:12:32.364 Number of Power States: 1 00:12:32.364 Current Power State: Power State #0 00:12:32.364 Power State #0: 00:12:32.364 Max Power: 0.00 W 00:12:32.364 Non-Operational State: Operational 00:12:32.364 Entry Latency: Not Reported 00:12:32.364 Exit Latency: Not Reported 00:12:32.364 Relative Read Throughput: 0 00:12:32.364 Relative Read Latency: 0 00:12:32.364 Relative Write Throughput: 0 00:12:32.364 Relative Write Latency: 0 00:12:32.364 Idle Power: Not Reported 00:12:32.364 Active Power: Not Reported 00:12:32.364 Non-Operational Permissive Mode: Not Supported 00:12:32.364 00:12:32.364 Health Information 00:12:32.364 ================== 00:12:32.364 Critical Warnings: 00:12:32.364 Available Spare Space: OK 00:12:32.364 Temperature: OK 00:12:32.364 Device Reliability: OK 00:12:32.364 Read Only: No 00:12:32.364 Volatile Memory Backup: OK 00:12:32.364 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:32.364 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:32.364 Available Spare: 0% 00:12:32.364 Available Sp[2024-12-09 04:03:00.780444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:32.364 [2024-12-09 04:03:00.780462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:32.364 [2024-12-09 04:03:00.780506] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:12:32.364 [2024-12-09 04:03:00.780525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.364 [2024-12-09 04:03:00.780537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.364 [2024-12-09 04:03:00.780547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.364 [2024-12-09 04:03:00.780557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:32.364 [2024-12-09 04:03:00.784284] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:32.364 [2024-12-09 04:03:00.784306] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:32.364 [2024-12-09 04:03:00.784973] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:32.364 [2024-12-09 04:03:00.785061] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:12:32.364 [2024-12-09 04:03:00.785075] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:12:32.364 [2024-12-09 04:03:00.785978] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:32.364 [2024-12-09 04:03:00.786002] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:12:32.364 [2024-12-09 04:03:00.786056] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:32.364 [2024-12-09 04:03:00.788021] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:32.364 are Threshold: 0% 00:12:32.364 Life Percentage Used: 0% 00:12:32.364 Data Units Read: 0 00:12:32.364 Data 
Units Written: 0 00:12:32.364 Host Read Commands: 0 00:12:32.364 Host Write Commands: 0 00:12:32.364 Controller Busy Time: 0 minutes 00:12:32.364 Power Cycles: 0 00:12:32.364 Power On Hours: 0 hours 00:12:32.364 Unsafe Shutdowns: 0 00:12:32.364 Unrecoverable Media Errors: 0 00:12:32.364 Lifetime Error Log Entries: 0 00:12:32.364 Warning Temperature Time: 0 minutes 00:12:32.364 Critical Temperature Time: 0 minutes 00:12:32.364 00:12:32.364 Number of Queues 00:12:32.364 ================ 00:12:32.364 Number of I/O Submission Queues: 127 00:12:32.364 Number of I/O Completion Queues: 127 00:12:32.364 00:12:32.364 Active Namespaces 00:12:32.364 ================= 00:12:32.364 Namespace ID:1 00:12:32.364 Error Recovery Timeout: Unlimited 00:12:32.364 Command Set Identifier: NVM (00h) 00:12:32.364 Deallocate: Supported 00:12:32.364 Deallocated/Unwritten Error: Not Supported 00:12:32.364 Deallocated Read Value: Unknown 00:12:32.364 Deallocate in Write Zeroes: Not Supported 00:12:32.364 Deallocated Guard Field: 0xFFFF 00:12:32.364 Flush: Supported 00:12:32.364 Reservation: Supported 00:12:32.364 Namespace Sharing Capabilities: Multiple Controllers 00:12:32.364 Size (in LBAs): 131072 (0GiB) 00:12:32.364 Capacity (in LBAs): 131072 (0GiB) 00:12:32.364 Utilization (in LBAs): 131072 (0GiB) 00:12:32.364 NGUID: 07D9A539FF234D2C94FF04FF7F2B2437 00:12:32.364 UUID: 07d9a539-ff23-4d2c-94ff-04ff7f2b2437 00:12:32.364 Thin Provisioning: Not Supported 00:12:32.364 Per-NS Atomic Units: Yes 00:12:32.364 Atomic Boundary Size (Normal): 0 00:12:32.364 Atomic Boundary Size (PFail): 0 00:12:32.364 Atomic Boundary Offset: 0 00:12:32.364 Maximum Single Source Range Length: 65535 00:12:32.364 Maximum Copy Length: 65535 00:12:32.364 Maximum Source Range Count: 1 00:12:32.364 NGUID/EUI64 Never Reused: No 00:12:32.364 Namespace Write Protected: No 00:12:32.364 Number of LBA Formats: 1 00:12:32.364 Current LBA Format: LBA Format #00 00:12:32.364 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:12:32.364 00:12:32.364 04:03:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:32.622 [2024-12-09 04:03:01.042346] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:37.888 Initializing NVMe Controllers 00:12:37.888 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:37.888 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:37.888 Initialization complete. Launching workers. 00:12:37.888 ======================================================== 00:12:37.888 Latency(us) 00:12:37.888 Device Information : IOPS MiB/s Average min max 00:12:37.888 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 29639.85 115.78 4317.78 1249.22 11377.03 00:12:37.888 ======================================================== 00:12:37.888 Total : 29639.85 115.78 4317.78 1249.22 11377.03 00:12:37.888 00:12:37.888 [2024-12-09 04:03:06.063419] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:37.888 04:03:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:37.888 [2024-12-09 04:03:06.330724] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:43.152 Initializing NVMe Controllers 00:12:43.152 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:12:43.152 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:43.152 Initialization complete. Launching workers. 00:12:43.152 ======================================================== 00:12:43.152 Latency(us) 00:12:43.152 Device Information : IOPS MiB/s Average min max 00:12:43.152 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.50 62.67 7978.09 6984.49 8117.26 00:12:43.152 ======================================================== 00:12:43.152 Total : 16042.50 62.67 7978.09 6984.49 8117.26 00:12:43.152 00:12:43.152 [2024-12-09 04:03:11.368374] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:43.152 04:03:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:43.153 [2024-12-09 04:03:11.606577] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:48.418 [2024-12-09 04:03:16.720860] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:48.418 Initializing NVMe Controllers 00:12:48.418 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.418 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:48.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:48.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:48.418 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:48.418 Initialization complete. Launching workers. 
00:12:48.418 Starting thread on core 2 00:12:48.418 Starting thread on core 3 00:12:48.418 Starting thread on core 1 00:12:48.418 04:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:48.675 [2024-12-09 04:03:17.054807] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.951 [2024-12-09 04:03:20.117691] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.951 Initializing NVMe Controllers 00:12:51.951 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.951 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:51.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:51.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:51.951 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:51.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:51.951 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:51.951 Initialization complete. Launching workers. 
00:12:51.951 Starting thread on core 1 with urgent priority queue 00:12:51.951 Starting thread on core 2 with urgent priority queue 00:12:51.951 Starting thread on core 3 with urgent priority queue 00:12:51.951 Starting thread on core 0 with urgent priority queue 00:12:51.951 SPDK bdev Controller (SPDK1 ) core 0: 4473.67 IO/s 22.35 secs/100000 ios 00:12:51.951 SPDK bdev Controller (SPDK1 ) core 1: 5294.00 IO/s 18.89 secs/100000 ios 00:12:51.951 SPDK bdev Controller (SPDK1 ) core 2: 5774.00 IO/s 17.32 secs/100000 ios 00:12:51.951 SPDK bdev Controller (SPDK1 ) core 3: 5778.33 IO/s 17.31 secs/100000 ios 00:12:51.951 ======================================================== 00:12:51.951 00:12:51.951 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:51.951 [2024-12-09 04:03:20.430860] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:51.951 Initializing NVMe Controllers 00:12:51.951 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.951 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:51.951 Namespace ID: 1 size: 0GB 00:12:51.951 Initialization complete. 00:12:51.951 INFO: using host memory buffer for IO 00:12:51.951 Hello world! 
00:12:51.951 [2024-12-09 04:03:20.464475] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:51.951 04:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:52.207 [2024-12-09 04:03:20.770161] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.580 Initializing NVMe Controllers 00:12:53.580 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.580 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:53.580 Initialization complete. Launching workers. 00:12:53.580 submit (in ns) avg, min, max = 8524.7, 3516.7, 4016075.6 00:12:53.580 complete (in ns) avg, min, max = 26766.8, 2062.2, 4014678.9 00:12:53.580 00:12:53.580 Submit histogram 00:12:53.580 ================ 00:12:53.580 Range in us Cumulative Count 00:12:53.580 3.508 - 3.532: 0.1937% ( 24) 00:12:53.580 3.532 - 3.556: 0.9766% ( 97) 00:12:53.580 3.556 - 3.579: 2.8571% ( 233) 00:12:53.580 3.579 - 3.603: 6.5779% ( 461) 00:12:53.580 3.603 - 3.627: 12.7119% ( 760) 00:12:53.580 3.627 - 3.650: 20.3713% ( 949) 00:12:53.580 3.650 - 3.674: 28.7732% ( 1041) 00:12:53.580 3.674 - 3.698: 36.8200% ( 997) 00:12:53.580 3.698 - 3.721: 44.1566% ( 909) 00:12:53.580 3.721 - 3.745: 49.8951% ( 711) 00:12:53.580 3.745 - 3.769: 54.3745% ( 555) 00:12:53.580 3.769 - 3.793: 58.3293% ( 490) 00:12:53.580 3.793 - 3.816: 61.8160% ( 432) 00:12:53.580 3.816 - 3.840: 65.4237% ( 447) 00:12:53.580 3.840 - 3.864: 69.4108% ( 494) 00:12:53.580 3.864 - 3.887: 73.7369% ( 536) 00:12:53.580 3.887 - 3.911: 77.9742% ( 525) 00:12:53.580 3.911 - 3.935: 81.5981% ( 449) 00:12:53.580 3.935 - 3.959: 84.3180% ( 337) 00:12:53.580 3.959 - 3.982: 86.4891% ( 269) 00:12:53.580 3.982 - 4.006: 88.1598% ( 207) 
00:12:53.580 4.006 - 4.030: 89.4108% ( 155) 00:12:53.580 4.030 - 4.053: 90.5327% ( 139) 00:12:53.580 4.053 - 4.077: 91.6303% ( 136) 00:12:53.581 4.077 - 4.101: 92.6796% ( 130) 00:12:53.581 4.101 - 4.124: 93.5432% ( 107) 00:12:53.581 4.124 - 4.148: 94.3584% ( 101) 00:12:53.581 4.148 - 4.172: 94.9475% ( 73) 00:12:53.581 4.172 - 4.196: 95.5367% ( 73) 00:12:53.581 4.196 - 4.219: 95.8918% ( 44) 00:12:53.581 4.219 - 4.243: 96.1985% ( 38) 00:12:53.581 4.243 - 4.267: 96.3519% ( 19) 00:12:53.581 4.267 - 4.290: 96.5295% ( 22) 00:12:53.581 4.290 - 4.314: 96.6263% ( 12) 00:12:53.581 4.314 - 4.338: 96.7554% ( 16) 00:12:53.581 4.338 - 4.361: 96.8604% ( 13) 00:12:53.581 4.361 - 4.385: 96.9572% ( 12) 00:12:53.581 4.385 - 4.409: 97.0621% ( 13) 00:12:53.581 4.409 - 4.433: 97.1186% ( 7) 00:12:53.581 4.433 - 4.456: 97.1994% ( 10) 00:12:53.581 4.456 - 4.480: 97.2236% ( 3) 00:12:53.581 4.480 - 4.504: 97.2397% ( 2) 00:12:53.581 4.504 - 4.527: 97.2639% ( 3) 00:12:53.581 4.527 - 4.551: 97.2962% ( 4) 00:12:53.581 4.551 - 4.575: 97.3043% ( 1) 00:12:53.581 4.622 - 4.646: 97.3285% ( 3) 00:12:53.581 4.646 - 4.670: 97.3366% ( 1) 00:12:53.581 4.693 - 4.717: 97.3527% ( 2) 00:12:53.581 4.717 - 4.741: 97.3608% ( 1) 00:12:53.581 4.741 - 4.764: 97.3769% ( 2) 00:12:53.581 4.764 - 4.788: 97.4092% ( 4) 00:12:53.581 4.788 - 4.812: 97.4334% ( 3) 00:12:53.581 4.812 - 4.836: 97.4496% ( 2) 00:12:53.581 4.836 - 4.859: 97.4899% ( 5) 00:12:53.581 4.859 - 4.883: 97.5061% ( 2) 00:12:53.581 4.883 - 4.907: 97.5545% ( 6) 00:12:53.581 4.907 - 4.930: 97.5787% ( 3) 00:12:53.581 4.930 - 4.954: 97.6190% ( 5) 00:12:53.581 4.954 - 4.978: 97.6352% ( 2) 00:12:53.581 4.978 - 5.001: 97.6755% ( 5) 00:12:53.581 5.001 - 5.025: 97.7240% ( 6) 00:12:53.581 5.025 - 5.049: 97.7401% ( 2) 00:12:53.581 5.049 - 5.073: 97.7643% ( 3) 00:12:53.581 5.073 - 5.096: 97.8047% ( 5) 00:12:53.581 5.096 - 5.120: 97.8289% ( 3) 00:12:53.581 5.120 - 5.144: 97.8531% ( 3) 00:12:53.581 5.144 - 5.167: 97.8692% ( 2) 00:12:53.581 5.167 - 5.191: 97.8854% ( 2) 
00:12:53.581 5.191 - 5.215: 97.8935% ( 1) 00:12:53.581 5.215 - 5.239: 97.9338% ( 5) 00:12:53.581 5.239 - 5.262: 97.9500% ( 2) 00:12:53.581 5.262 - 5.286: 97.9742% ( 3) 00:12:53.581 5.310 - 5.333: 98.0065% ( 4) 00:12:53.581 5.333 - 5.357: 98.0145% ( 1) 00:12:53.581 5.357 - 5.381: 98.0307% ( 2) 00:12:53.581 5.381 - 5.404: 98.0630% ( 4) 00:12:53.581 5.404 - 5.428: 98.0710% ( 1) 00:12:53.581 5.452 - 5.476: 98.0872% ( 2) 00:12:53.581 5.476 - 5.499: 98.0952% ( 1) 00:12:53.581 5.523 - 5.547: 98.1033% ( 1) 00:12:53.581 5.760 - 5.784: 98.1114% ( 1) 00:12:53.581 6.021 - 6.044: 98.1195% ( 1) 00:12:53.581 6.044 - 6.068: 98.1275% ( 1) 00:12:53.581 6.210 - 6.258: 98.1437% ( 2) 00:12:53.581 6.258 - 6.305: 98.1517% ( 1) 00:12:53.581 6.921 - 6.969: 98.1598% ( 1) 00:12:53.581 7.159 - 7.206: 98.1679% ( 1) 00:12:53.581 7.206 - 7.253: 98.1840% ( 2) 00:12:53.581 7.253 - 7.301: 98.1921% ( 1) 00:12:53.581 7.301 - 7.348: 98.2002% ( 1) 00:12:53.581 7.348 - 7.396: 98.2163% ( 2) 00:12:53.581 7.396 - 7.443: 98.2244% ( 1) 00:12:53.581 7.443 - 7.490: 98.2324% ( 1) 00:12:53.581 7.538 - 7.585: 98.2486% ( 2) 00:12:53.581 7.585 - 7.633: 98.2567% ( 1) 00:12:53.581 7.633 - 7.680: 98.2647% ( 1) 00:12:53.581 7.680 - 7.727: 98.2728% ( 1) 00:12:53.581 7.727 - 7.775: 98.2809% ( 1) 00:12:53.581 7.775 - 7.822: 98.2889% ( 1) 00:12:53.581 7.870 - 7.917: 98.3051% ( 2) 00:12:53.581 7.917 - 7.964: 98.3132% ( 1) 00:12:53.581 7.964 - 8.012: 98.3212% ( 1) 00:12:53.581 8.012 - 8.059: 98.3535% ( 4) 00:12:53.581 8.107 - 8.154: 98.3616% ( 1) 00:12:53.581 8.154 - 8.201: 98.3858% ( 3) 00:12:53.581 8.296 - 8.344: 98.4019% ( 2) 00:12:53.581 8.344 - 8.391: 98.4100% ( 1) 00:12:53.581 8.391 - 8.439: 98.4342% ( 3) 00:12:53.581 8.533 - 8.581: 98.4423% ( 1) 00:12:53.581 8.581 - 8.628: 98.4504% ( 1) 00:12:53.581 8.723 - 8.770: 98.4665% ( 2) 00:12:53.581 8.865 - 8.913: 98.4746% ( 1) 00:12:53.581 9.292 - 9.339: 98.4907% ( 2) 00:12:53.581 10.003 - 10.050: 98.4988% ( 1) 00:12:53.581 10.050 - 10.098: 98.5069% ( 1) 00:12:53.581 10.098 - 
10.145: 98.5149% ( 1) 00:12:53.581 10.145 - 10.193: 98.5230% ( 1) 00:12:53.581 10.477 - 10.524: 98.5311% ( 1) 00:12:53.581 10.572 - 10.619: 98.5391% ( 1) 00:12:53.581 10.619 - 10.667: 98.5472% ( 1) 00:12:53.581 10.714 - 10.761: 98.5553% ( 1) 00:12:53.581 10.904 - 10.951: 98.5634% ( 1) 00:12:53.581 11.093 - 11.141: 98.5714% ( 1) 00:12:53.581 11.710 - 11.757: 98.5795% ( 1) 00:12:53.581 11.852 - 11.899: 98.5876% ( 1) 00:12:53.581 12.136 - 12.231: 98.6037% ( 2) 00:12:53.581 12.326 - 12.421: 98.6118% ( 1) 00:12:53.581 12.421 - 12.516: 98.6199% ( 1) 00:12:53.581 12.610 - 12.705: 98.6279% ( 1) 00:12:53.581 12.705 - 12.800: 98.6360% ( 1) 00:12:53.581 12.990 - 13.084: 98.6441% ( 1) 00:12:53.581 13.369 - 13.464: 98.6521% ( 1) 00:12:53.581 13.843 - 13.938: 98.6602% ( 1) 00:12:53.581 14.033 - 14.127: 98.6764% ( 2) 00:12:53.581 14.412 - 14.507: 98.6844% ( 1) 00:12:53.581 14.696 - 14.791: 98.7006% ( 2) 00:12:53.581 14.791 - 14.886: 98.7086% ( 1) 00:12:53.581 14.886 - 14.981: 98.7167% ( 1) 00:12:53.581 16.308 - 16.403: 98.7248% ( 1) 00:12:53.581 16.972 - 17.067: 98.7328% ( 1) 00:12:53.581 17.161 - 17.256: 98.7409% ( 1) 00:12:53.581 17.256 - 17.351: 98.7571% ( 2) 00:12:53.581 17.351 - 17.446: 98.7893% ( 4) 00:12:53.581 17.541 - 17.636: 98.8378% ( 6) 00:12:53.581 17.636 - 17.730: 98.8701% ( 4) 00:12:53.581 17.730 - 17.825: 98.9104% ( 5) 00:12:53.581 17.825 - 17.920: 98.9750% ( 8) 00:12:53.581 17.920 - 18.015: 99.0476% ( 9) 00:12:53.581 18.015 - 18.110: 99.1283% ( 10) 00:12:53.581 18.110 - 18.204: 99.2010% ( 9) 00:12:53.581 18.204 - 18.299: 99.2413% ( 5) 00:12:53.581 18.299 - 18.394: 99.3220% ( 10) 00:12:53.581 18.394 - 18.489: 99.3462% ( 3) 00:12:53.581 18.489 - 18.584: 99.3785% ( 4) 00:12:53.581 18.584 - 18.679: 99.4431% ( 8) 00:12:53.581 18.679 - 18.773: 99.5077% ( 8) 00:12:53.581 18.773 - 18.868: 99.5722% ( 8) 00:12:53.581 18.868 - 18.963: 99.6126% ( 5) 00:12:53.581 18.963 - 19.058: 99.6368% ( 3) 00:12:53.581 19.058 - 19.153: 99.6610% ( 3) 00:12:53.581 19.153 - 19.247: 99.6772% 
( 2) 00:12:53.581 19.247 - 19.342: 99.6852% ( 1) 00:12:53.581 19.342 - 19.437: 99.6933% ( 1) 00:12:53.581 19.437 - 19.532: 99.7175% ( 3) 00:12:53.581 19.532 - 19.627: 99.7417% ( 3) 00:12:53.581 19.627 - 19.721: 99.7821% ( 5) 00:12:53.581 19.721 - 19.816: 99.7982% ( 2) 00:12:53.581 19.816 - 19.911: 99.8063% ( 1) 00:12:53.581 20.006 - 20.101: 99.8224% ( 2) 00:12:53.581 20.290 - 20.385: 99.8305% ( 1) 00:12:53.581 20.859 - 20.954: 99.8386% ( 1) 00:12:53.581 22.945 - 23.040: 99.8467% ( 1) 00:12:53.581 23.893 - 23.988: 99.8547% ( 1) 00:12:53.581 25.790 - 25.979: 99.8628% ( 1) 00:12:53.581 27.876 - 28.065: 99.8789% ( 2) 00:12:53.581 29.393 - 29.582: 99.8870% ( 1) 00:12:53.581 3980.705 - 4004.978: 99.9677% ( 10) 00:12:53.581 4004.978 - 4029.250: 100.0000% ( 4) 00:12:53.581 00:12:53.581 Complete histogram 00:12:53.581 ================== 00:12:53.581 Range in us Cumulative Count 00:12:53.581 2.062 - 2.074: 10.7748% ( 1335) 00:12:53.581 2.074 - 2.086: 45.7546% ( 4334) 00:12:53.581 2.086 - 2.098: 48.1195% ( 293) 00:12:53.581 2.098 - 2.110: 52.7684% ( 576) 00:12:53.581 2.110 - 2.121: 58.2002% ( 673) 00:12:53.581 2.121 - 2.133: 59.4108% ( 150) 00:12:53.581 2.133 - 2.145: 68.1114% ( 1078) 00:12:53.581 2.145 - 2.157: 74.7538% ( 823) 00:12:53.581 2.157 - 2.169: 75.6336% ( 109) 00:12:53.581 2.169 - 2.181: 78.3616% ( 338) 00:12:53.581 2.181 - 2.193: 79.7417% ( 171) 00:12:53.581 2.193 - 2.204: 80.3471% ( 75) 00:12:53.581 2.204 - 2.216: 83.7853% ( 426) 00:12:53.581 2.216 - 2.228: 87.9822% ( 520) 00:12:53.581 2.228 - 2.240: 90.1130% ( 264) 00:12:53.581 2.240 - 2.252: 91.6788% ( 194) 00:12:53.581 2.252 - 2.264: 92.5182% ( 104) 00:12:53.581 2.264 - 2.276: 92.7119% ( 24) 00:12:53.581 2.276 - 2.287: 93.1719% ( 57) 00:12:53.582 2.287 - 2.299: 93.9467% ( 96) 00:12:53.582 2.299 - 2.311: 94.7700% ( 102) 00:12:53.582 2.311 - 2.323: 94.9879% ( 27) 00:12:53.582 2.323 - 2.335: 95.0363% ( 6) 00:12:53.582 2.335 - 2.347: 95.0605% ( 3) 00:12:53.582 2.347 - 2.359: 95.1332% ( 9) 00:12:53.582 2.359 - 
2.370: 95.3914% ( 32) 00:12:53.582 2.370 - 2.382: 95.9241% ( 66) 00:12:53.582 2.382 - 2.394: 96.4891% ( 70) 00:12:53.582 2.394 - 2.406: 96.8281% ( 42) 00:12:53.582 2.406 - 2.418: 97.0299% ( 25) 00:12:53.582 2.418 - 2.430: 97.1994% ( 21) 00:12:53.582 2.430 - 2.441: 97.3850% ( 23) 00:12:53.582 2.441 - 2.453: 97.5706% ( 23) 00:12:53.582 2.453 - 2.465: 97.6917% ( 15) 00:12:53.582 2.465 - 2.477: 97.7966% ( 13) 00:12:53.582 2.477 - 2.489: 97.8854% ( 11) 00:12:53.582 2.489 - 2.501: 97.9903% ( 13) 00:12:53.582 2.501 - 2.513: 98.0630% ( 9) 00:12:53.582 2.513 - 2.524: 98.0952% ( 4) 00:12:53.582 2.524 - 2.536: 98.1517% ( 7) 00:12:53.582 2.536 - 2.548: 98.2002% ( 6) 00:12:53.582 2.548 - 2.560: 98.2163% ( 2) 00:12:53.582 2.560 - 2.572: 98.2405% ( 3) 00:12:53.582 2.572 - 2.584: 98.2486% ( 1) 00:12:53.582 2.607 - 2.619: 98.2567% ( 1) 00:12:53.582 2.631 - 2.643: 98.2647% ( 1) 00:12:53.582 2.667 - 2.679: 98.2728% ( 1) 00:12:53.582 2.690 - 2.702: 98.2889% ( 2) 00:12:53.582 2.773 - 2.785: 98.2970% ( 1) 00:12:53.582 2.785 - 2.797: 98.3051% ( 1) 00:12:53.582 2.939 - 2.951: 98.3132% ( 1) 00:12:53.582 3.129 - 3.153: 98.3212% ( 1) 00:12:53.582 3.247 - 3.271: 98.3293% ( 1) 00:12:53.582 3.271 - 3.295: 98.3374% ( 1) 00:12:53.582 3.295 - 3.319: 98.3454% ( 1) 00:12:53.582 3.319 - 3.342: 98.3535% ( 1) 00:12:53.582 3.366 - 3.390: 98.3616% ( 1) 00:12:53.582 3.390 - 3.413: 98.3697% ( 1) 00:12:53.582 3.413 - 3.437: 98.3939% ( 3) 00:12:53.582 3.437 - 3.461: 98.4181% ( 3) 00:12:53.582 3.461 - 3.484: 98.4423% ( 3) 00:12:53.582 3.484 - 3.508: 98.4504% ( 1) 00:12:53.582 3.579 - 3.603: 98.4584% ( 1) 00:12:53.582 3.650 - 3.674: 98.4665% ( 1) 00:12:53.582 3.698 - 3.721: 98.4746% ( 1) 00:12:53.582 3.769 - 3.793: 98.4826% ( 1) 00:12:53.582 3.793 - 3.816: 98.4907% ( 1) 00:12:53.582 3.840 - 3.864: 98.4988% ( 1) 00:12:53.582 3.935 - 3.959: 98.5069% ( 1) 00:12:53.582 4.006 - 4.030: 98.5149% ( 1) 00:12:53.582 4.243 - 4.267: 98.5230% ( 1) 00:12:53.582 5.096 - 5.120: 98.5311% ( 1) 00:12:53.582 5.404 - 5.428: 
98.5391% ( 1) 00:12:53.582 5.452 - 5.476: 98.5472% ( 1) 00:12:53.582 [2024-12-09 04:03:21.797447] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:53.582 5.499 - 5.523: 98.5553% ( 1) 00:12:53.582 5.547 - 5.570: 98.5634% ( 1) 00:12:53.582 5.641 - 5.665: 98.5714% ( 1) 00:12:53.582 5.760 - 5.784: 98.5795% ( 1) 00:12:53.582 5.879 - 5.902: 98.5876% ( 1) 00:12:53.582 5.973 - 5.997: 98.5956% ( 1) 00:12:53.582 6.021 - 6.044: 98.6037% ( 1) 00:12:53.582 6.163 - 6.210: 98.6118% ( 1) 00:12:53.582 6.400 - 6.447: 98.6199% ( 1) 00:12:53.582 6.447 - 6.495: 98.6279% ( 1) 00:12:53.582 6.684 - 6.732: 98.6360% ( 1) 00:12:53.582 6.827 - 6.874: 98.6441% ( 1) 00:12:53.582 7.064 - 7.111: 98.6521% ( 1) 00:12:53.582 7.159 - 7.206: 98.6602% ( 1) 00:12:53.582 7.206 - 7.253: 98.6683% ( 1) 00:12:53.582 7.348 - 7.396: 98.6764% ( 1) 00:12:53.582 7.490 - 7.538: 98.6844% ( 1) 00:12:53.582 7.822 - 7.870: 98.6925% ( 1) 00:12:53.582 7.917 - 7.964: 98.7006% ( 1) 00:12:53.582 8.439 - 8.486: 98.7086% ( 1) 00:12:53.582 11.899 - 11.947: 98.7167% ( 1) 00:12:53.582 15.360 - 15.455: 98.7248% ( 1) 00:12:53.582 15.455 - 15.550: 98.7328% ( 1) 00:12:53.582 15.550 - 15.644: 98.7409% ( 1) 00:12:53.582 15.644 - 15.739: 98.7490% ( 1) 00:12:53.582 15.739 - 15.834: 98.7732% ( 3) 00:12:53.582 15.834 - 15.929: 98.7974% ( 3) 00:12:53.582 15.929 - 16.024: 98.8136% ( 2) 00:12:53.582 16.024 - 16.119: 98.8539% ( 5) 00:12:53.582 16.119 - 16.213: 98.8862% ( 4) 00:12:53.582 16.213 - 16.308: 98.9104% ( 3) 00:12:53.582 16.308 - 16.403: 98.9346% ( 3) 00:12:53.582 16.403 - 16.498: 98.9588% ( 3) 00:12:53.582 16.498 - 16.593: 99.0153% ( 7) 00:12:53.582 16.593 - 16.687: 99.0880% ( 9) 00:12:53.582 16.687 - 16.782: 99.1445% ( 7) 00:12:53.582 16.782 - 16.877: 99.1687% ( 3) 00:12:53.582 16.877 - 16.972: 99.2090% ( 5) 00:12:53.582 16.972 - 17.067: 99.2252% ( 2) 00:12:53.582 17.067 - 17.161: 99.2333% ( 1) 00:12:53.582 17.161 - 17.256: 99.2494% ( 2) 00:12:53.582 17.256 - 17.351: 
99.2655% ( 2) 00:12:53.582 17.351 - 17.446: 99.2978% ( 4) 00:12:53.582 17.446 - 17.541: 99.3140% ( 2) 00:12:53.582 17.636 - 17.730: 99.3220% ( 1) 00:12:53.582 17.730 - 17.825: 99.3301% ( 1) 00:12:53.582 17.920 - 18.015: 99.3382% ( 1) 00:12:53.582 18.204 - 18.299: 99.3462% ( 1) 00:12:53.582 18.963 - 19.058: 99.3543% ( 1) 00:12:53.582 20.385 - 20.480: 99.3624% ( 1) 00:12:53.582 25.600 - 25.790: 99.3705% ( 1) 00:12:53.582 40.770 - 40.960: 99.3785% ( 1) 00:12:53.582 163.840 - 164.599: 99.3866% ( 1) 00:12:53.582 3592.344 - 3616.616: 99.3947% ( 1) 00:12:53.582 3980.705 - 4004.978: 99.8305% ( 54) 00:12:53.582 4004.978 - 4029.250: 100.0000% ( 21) 00:12:53.582 00:12:53.582 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:53.582 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:53.582 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:53.582 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:53.582 04:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:53.582 [ 00:12:53.582 { 00:12:53.582 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:53.582 "subtype": "Discovery", 00:12:53.582 "listen_addresses": [], 00:12:53.582 "allow_any_host": true, 00:12:53.582 "hosts": [] 00:12:53.582 }, 00:12:53.582 { 00:12:53.582 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:53.582 "subtype": "NVMe", 00:12:53.582 "listen_addresses": [ 00:12:53.582 { 00:12:53.582 "trtype": "VFIOUSER", 00:12:53.582 "adrfam": "IPv4", 00:12:53.582 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:53.582 "trsvcid": "0" 
00:12:53.582 } 00:12:53.582 ], 00:12:53.582 "allow_any_host": true, 00:12:53.582 "hosts": [], 00:12:53.582 "serial_number": "SPDK1", 00:12:53.582 "model_number": "SPDK bdev Controller", 00:12:53.582 "max_namespaces": 32, 00:12:53.582 "min_cntlid": 1, 00:12:53.582 "max_cntlid": 65519, 00:12:53.582 "namespaces": [ 00:12:53.582 { 00:12:53.582 "nsid": 1, 00:12:53.582 "bdev_name": "Malloc1", 00:12:53.582 "name": "Malloc1", 00:12:53.582 "nguid": "07D9A539FF234D2C94FF04FF7F2B2437", 00:12:53.582 "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437" 00:12:53.582 } 00:12:53.582 ] 00:12:53.582 }, 00:12:53.582 { 00:12:53.582 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:53.582 "subtype": "NVMe", 00:12:53.582 "listen_addresses": [ 00:12:53.582 { 00:12:53.582 "trtype": "VFIOUSER", 00:12:53.582 "adrfam": "IPv4", 00:12:53.582 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:53.582 "trsvcid": "0" 00:12:53.582 } 00:12:53.582 ], 00:12:53.582 "allow_any_host": true, 00:12:53.582 "hosts": [], 00:12:53.582 "serial_number": "SPDK2", 00:12:53.582 "model_number": "SPDK bdev Controller", 00:12:53.582 "max_namespaces": 32, 00:12:53.582 "min_cntlid": 1, 00:12:53.582 "max_cntlid": 65519, 00:12:53.582 "namespaces": [ 00:12:53.582 { 00:12:53.582 "nsid": 1, 00:12:53.582 "bdev_name": "Malloc2", 00:12:53.582 "name": "Malloc2", 00:12:53.582 "nguid": "1F14A502DA0A41F2920C11B007901159", 00:12:53.582 "uuid": "1f14a502-da0a-41f2-920c-11b007901159" 00:12:53.582 } 00:12:53.582 ] 00:12:53.582 } 00:12:53.582 ] 00:12:53.582 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:53.582 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=207599 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 
subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:12:53.583 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:12:53.840 [2024-12-09 04:03:22.298771] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:53.840 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:54.098 Malloc3 00:12:54.098 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:54.663 [2024-12-09 04:03:22.954932] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:54.663 04:03:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:54.663 Asynchronous Event Request test 00:12:54.663 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.663 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:54.663 Registering asynchronous event callbacks... 00:12:54.663 Starting namespace attribute notice tests for all controllers... 00:12:54.663 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:54.663 aer_cb - Changed Namespace 00:12:54.663 Cleaning up... 
00:12:54.663 [ 00:12:54.663 { 00:12:54.663 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:54.663 "subtype": "Discovery", 00:12:54.663 "listen_addresses": [], 00:12:54.663 "allow_any_host": true, 00:12:54.663 "hosts": [] 00:12:54.663 }, 00:12:54.663 { 00:12:54.663 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:54.664 "subtype": "NVMe", 00:12:54.664 "listen_addresses": [ 00:12:54.664 { 00:12:54.664 "trtype": "VFIOUSER", 00:12:54.664 "adrfam": "IPv4", 00:12:54.664 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:54.664 "trsvcid": "0" 00:12:54.664 } 00:12:54.664 ], 00:12:54.664 "allow_any_host": true, 00:12:54.664 "hosts": [], 00:12:54.664 "serial_number": "SPDK1", 00:12:54.664 "model_number": "SPDK bdev Controller", 00:12:54.664 "max_namespaces": 32, 00:12:54.664 "min_cntlid": 1, 00:12:54.664 "max_cntlid": 65519, 00:12:54.664 "namespaces": [ 00:12:54.664 { 00:12:54.664 "nsid": 1, 00:12:54.664 "bdev_name": "Malloc1", 00:12:54.664 "name": "Malloc1", 00:12:54.664 "nguid": "07D9A539FF234D2C94FF04FF7F2B2437", 00:12:54.664 "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437" 00:12:54.664 }, 00:12:54.664 { 00:12:54.664 "nsid": 2, 00:12:54.664 "bdev_name": "Malloc3", 00:12:54.664 "name": "Malloc3", 00:12:54.664 "nguid": "FE3543652DAC4D0FB8FA008A85669FA7", 00:12:54.664 "uuid": "fe354365-2dac-4d0f-b8fa-008a85669fa7" 00:12:54.664 } 00:12:54.664 ] 00:12:54.664 }, 00:12:54.664 { 00:12:54.664 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:54.664 "subtype": "NVMe", 00:12:54.664 "listen_addresses": [ 00:12:54.664 { 00:12:54.664 "trtype": "VFIOUSER", 00:12:54.664 "adrfam": "IPv4", 00:12:54.664 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:54.664 "trsvcid": "0" 00:12:54.664 } 00:12:54.664 ], 00:12:54.664 "allow_any_host": true, 00:12:54.664 "hosts": [], 00:12:54.664 "serial_number": "SPDK2", 00:12:54.664 "model_number": "SPDK bdev Controller", 00:12:54.664 "max_namespaces": 32, 00:12:54.664 "min_cntlid": 1, 00:12:54.664 "max_cntlid": 65519, 00:12:54.664 "namespaces": [ 
00:12:54.664 { 00:12:54.664 "nsid": 1, 00:12:54.664 "bdev_name": "Malloc2", 00:12:54.664 "name": "Malloc2", 00:12:54.664 "nguid": "1F14A502DA0A41F2920C11B007901159", 00:12:54.664 "uuid": "1f14a502-da0a-41f2-920c-11b007901159" 00:12:54.664 } 00:12:54.664 ] 00:12:54.664 } 00:12:54.664 ] 00:12:54.923 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 207599 00:12:54.923 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.923 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:54.923 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:54.923 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:54.923 [2024-12-09 04:03:23.263765] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:12:54.923 [2024-12-09 04:03:23.263803] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid207739 ] 00:12:54.923 [2024-12-09 04:03:23.312100] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:54.923 [2024-12-09 04:03:23.320556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:54.923 [2024-12-09 04:03:23.320605] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3af1c4f000 00:12:54.923 [2024-12-09 04:03:23.321558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.322562] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.323568] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.324594] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.325596] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.326607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.327607] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:54.923 
[2024-12-09 04:03:23.328602] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:54.923 [2024-12-09 04:03:23.329618] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:54.924 [2024-12-09 04:03:23.329655] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3af1c44000 00:12:54.924 [2024-12-09 04:03:23.330772] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:54.924 [2024-12-09 04:03:23.345909] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:54.924 [2024-12-09 04:03:23.345946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:12:54.924 [2024-12-09 04:03:23.351061] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:54.924 [2024-12-09 04:03:23.351114] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:54.924 [2024-12-09 04:03:23.351203] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:12:54.924 [2024-12-09 04:03:23.351225] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:12:54.924 [2024-12-09 04:03:23.351236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:12:54.924 [2024-12-09 04:03:23.352066] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:54.924 [2024-12-09 04:03:23.352090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:12:54.924 [2024-12-09 04:03:23.352105] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:12:54.924 [2024-12-09 04:03:23.353069] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:54.924 [2024-12-09 04:03:23.353090] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:12:54.924 [2024-12-09 04:03:23.353104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:12:54.924 [2024-12-09 04:03:23.354076] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:54.924 [2024-12-09 04:03:23.354096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:54.924 [2024-12-09 04:03:23.355084] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:54.924 [2024-12-09 04:03:23.355104] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:12:54.924 [2024-12-09 04:03:23.355113] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:12:54.924 [2024-12-09 04:03:23.355125] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:54.924 [2024-12-09 04:03:23.355238] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:12:54.924 [2024-12-09 04:03:23.355247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:54.924 [2024-12-09 04:03:23.355277] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:54.924 [2024-12-09 04:03:23.356095] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:54.924 [2024-12-09 04:03:23.357096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:54.924 [2024-12-09 04:03:23.358104] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:54.924 [2024-12-09 04:03:23.359102] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:54.924 [2024-12-09 04:03:23.359181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:54.924 [2024-12-09 04:03:23.360116] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:54.924 [2024-12-09 04:03:23.360136] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:54.924 [2024-12-09 04:03:23.360146] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.360172] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:12:54.924 [2024-12-09 04:03:23.360185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.360208] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.924 [2024-12-09 04:03:23.360218] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.924 [2024-12-09 04:03:23.360224] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.924 [2024-12-09 04:03:23.360240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.924 [2024-12-09 04:03:23.368303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:54.924 [2024-12-09 04:03:23.368341] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:12:54.924 [2024-12-09 04:03:23.368351] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:12:54.924 [2024-12-09 04:03:23.368359] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:12:54.924 [2024-12-09 04:03:23.368367] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:54.924 [2024-12-09 04:03:23.368374] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:12:54.924 [2024-12-09 04:03:23.368382] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:12:54.924 [2024-12-09 04:03:23.368390] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.368407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.368423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:54.924 [2024-12-09 04:03:23.376300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:54.924 [2024-12-09 04:03:23.376324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.924 [2024-12-09 04:03:23.376338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.924 [2024-12-09 04:03:23.376351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.924 [2024-12-09 04:03:23.376363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.924 [2024-12-09 04:03:23.376372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.376389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.376404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:54.924 [2024-12-09 04:03:23.384384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:54.924 [2024-12-09 04:03:23.384403] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:12:54.924 [2024-12-09 04:03:23.384412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.384424] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.384434] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.384449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.924 [2024-12-09 04:03:23.392297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:54.924 [2024-12-09 04:03:23.392374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.392392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:12:54.924 
[2024-12-09 04:03:23.392406] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:54.924 [2024-12-09 04:03:23.392415] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:54.924 [2024-12-09 04:03:23.392422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.924 [2024-12-09 04:03:23.392432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:54.924 [2024-12-09 04:03:23.400282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:54.924 [2024-12-09 04:03:23.400305] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:12:54.924 [2024-12-09 04:03:23.400334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.400350] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:12:54.924 [2024-12-09 04:03:23.400364] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.924 [2024-12-09 04:03:23.400373] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.924 [2024-12-09 04:03:23.400379] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.924 [2024-12-09 04:03:23.400389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.924 [2024-12-09 04:03:23.408282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.408320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.408338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.408353] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:54.925 [2024-12-09 04:03:23.408362] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.925 [2024-12-09 04:03:23.408368] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.925 [2024-12-09 04:03:23.408378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.416294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.416316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416376] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416384] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:12:54.925 [2024-12-09 04:03:23.416392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:12:54.925 [2024-12-09 04:03:23.416401] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:12:54.925 [2024-12-09 04:03:23.416425] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.424284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.424316] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.432299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.432325] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.440285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 
04:03:23.440309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.447319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.447352] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:54.925 [2024-12-09 04:03:23.447364] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:54.925 [2024-12-09 04:03:23.447371] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:54.925 [2024-12-09 04:03:23.447377] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:54.925 [2024-12-09 04:03:23.447383] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:54.925 [2024-12-09 04:03:23.447393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:54.925 [2024-12-09 04:03:23.447407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:54.925 [2024-12-09 04:03:23.447416] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:54.925 [2024-12-09 04:03:23.447422] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.925 [2024-12-09 04:03:23.447431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.447443] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:54.925 [2024-12-09 04:03:23.447452] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:54.925 [2024-12-09 04:03:23.447458] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.925 [2024-12-09 04:03:23.447466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.447479] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:54.925 [2024-12-09 04:03:23.447488] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:54.925 [2024-12-09 04:03:23.447494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:54.925 [2024-12-09 04:03:23.447503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:54.925 [2024-12-09 04:03:23.456300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.456328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.456347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:54.925 [2024-12-09 04:03:23.456360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:54.925 ===================================================== 00:12:54.925 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:54.925 ===================================================== 00:12:54.925 Controller Capabilities/Features 00:12:54.925 
================================ 00:12:54.925 Vendor ID: 4e58 00:12:54.925 Subsystem Vendor ID: 4e58 00:12:54.925 Serial Number: SPDK2 00:12:54.925 Model Number: SPDK bdev Controller 00:12:54.925 Firmware Version: 25.01 00:12:54.925 Recommended Arb Burst: 6 00:12:54.925 IEEE OUI Identifier: 8d 6b 50 00:12:54.925 Multi-path I/O 00:12:54.925 May have multiple subsystem ports: Yes 00:12:54.925 May have multiple controllers: Yes 00:12:54.925 Associated with SR-IOV VF: No 00:12:54.925 Max Data Transfer Size: 131072 00:12:54.925 Max Number of Namespaces: 32 00:12:54.925 Max Number of I/O Queues: 127 00:12:54.925 NVMe Specification Version (VS): 1.3 00:12:54.925 NVMe Specification Version (Identify): 1.3 00:12:54.925 Maximum Queue Entries: 256 00:12:54.925 Contiguous Queues Required: Yes 00:12:54.925 Arbitration Mechanisms Supported 00:12:54.925 Weighted Round Robin: Not Supported 00:12:54.925 Vendor Specific: Not Supported 00:12:54.925 Reset Timeout: 15000 ms 00:12:54.925 Doorbell Stride: 4 bytes 00:12:54.925 NVM Subsystem Reset: Not Supported 00:12:54.925 Command Sets Supported 00:12:54.925 NVM Command Set: Supported 00:12:54.925 Boot Partition: Not Supported 00:12:54.925 Memory Page Size Minimum: 4096 bytes 00:12:54.925 Memory Page Size Maximum: 4096 bytes 00:12:54.925 Persistent Memory Region: Not Supported 00:12:54.925 Optional Asynchronous Events Supported 00:12:54.925 Namespace Attribute Notices: Supported 00:12:54.925 Firmware Activation Notices: Not Supported 00:12:54.925 ANA Change Notices: Not Supported 00:12:54.925 PLE Aggregate Log Change Notices: Not Supported 00:12:54.925 LBA Status Info Alert Notices: Not Supported 00:12:54.925 EGE Aggregate Log Change Notices: Not Supported 00:12:54.925 Normal NVM Subsystem Shutdown event: Not Supported 00:12:54.925 Zone Descriptor Change Notices: Not Supported 00:12:54.925 Discovery Log Change Notices: Not Supported 00:12:54.925 Controller Attributes 00:12:54.925 128-bit Host Identifier: Supported 00:12:54.925 
Non-Operational Permissive Mode: Not Supported 00:12:54.925 NVM Sets: Not Supported 00:12:54.925 Read Recovery Levels: Not Supported 00:12:54.925 Endurance Groups: Not Supported 00:12:54.925 Predictable Latency Mode: Not Supported 00:12:54.925 Traffic Based Keep ALive: Not Supported 00:12:54.925 Namespace Granularity: Not Supported 00:12:54.925 SQ Associations: Not Supported 00:12:54.925 UUID List: Not Supported 00:12:54.925 Multi-Domain Subsystem: Not Supported 00:12:54.925 Fixed Capacity Management: Not Supported 00:12:54.925 Variable Capacity Management: Not Supported 00:12:54.925 Delete Endurance Group: Not Supported 00:12:54.925 Delete NVM Set: Not Supported 00:12:54.925 Extended LBA Formats Supported: Not Supported 00:12:54.925 Flexible Data Placement Supported: Not Supported 00:12:54.925 00:12:54.925 Controller Memory Buffer Support 00:12:54.925 ================================ 00:12:54.925 Supported: No 00:12:54.925 00:12:54.925 Persistent Memory Region Support 00:12:54.925 ================================ 00:12:54.925 Supported: No 00:12:54.925 00:12:54.925 Admin Command Set Attributes 00:12:54.925 ============================ 00:12:54.925 Security Send/Receive: Not Supported 00:12:54.925 Format NVM: Not Supported 00:12:54.925 Firmware Activate/Download: Not Supported 00:12:54.925 Namespace Management: Not Supported 00:12:54.925 Device Self-Test: Not Supported 00:12:54.925 Directives: Not Supported 00:12:54.925 NVMe-MI: Not Supported 00:12:54.925 Virtualization Management: Not Supported 00:12:54.925 Doorbell Buffer Config: Not Supported 00:12:54.925 Get LBA Status Capability: Not Supported 00:12:54.925 Command & Feature Lockdown Capability: Not Supported 00:12:54.926 Abort Command Limit: 4 00:12:54.926 Async Event Request Limit: 4 00:12:54.926 Number of Firmware Slots: N/A 00:12:54.926 Firmware Slot 1 Read-Only: N/A 00:12:54.926 Firmware Activation Without Reset: N/A 00:12:54.926 Multiple Update Detection Support: N/A 00:12:54.926 Firmware Update 
Granularity: No Information Provided 00:12:54.926 Per-Namespace SMART Log: No 00:12:54.926 Asymmetric Namespace Access Log Page: Not Supported 00:12:54.926 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:54.926 Command Effects Log Page: Supported 00:12:54.926 Get Log Page Extended Data: Supported 00:12:54.926 Telemetry Log Pages: Not Supported 00:12:54.926 Persistent Event Log Pages: Not Supported 00:12:54.926 Supported Log Pages Log Page: May Support 00:12:54.926 Commands Supported & Effects Log Page: Not Supported 00:12:54.926 Feature Identifiers & Effects Log Page:May Support 00:12:54.926 NVMe-MI Commands & Effects Log Page: May Support 00:12:54.926 Data Area 4 for Telemetry Log: Not Supported 00:12:54.926 Error Log Page Entries Supported: 128 00:12:54.926 Keep Alive: Supported 00:12:54.926 Keep Alive Granularity: 10000 ms 00:12:54.926 00:12:54.926 NVM Command Set Attributes 00:12:54.926 ========================== 00:12:54.926 Submission Queue Entry Size 00:12:54.926 Max: 64 00:12:54.926 Min: 64 00:12:54.926 Completion Queue Entry Size 00:12:54.926 Max: 16 00:12:54.926 Min: 16 00:12:54.926 Number of Namespaces: 32 00:12:54.926 Compare Command: Supported 00:12:54.926 Write Uncorrectable Command: Not Supported 00:12:54.926 Dataset Management Command: Supported 00:12:54.926 Write Zeroes Command: Supported 00:12:54.926 Set Features Save Field: Not Supported 00:12:54.926 Reservations: Not Supported 00:12:54.926 Timestamp: Not Supported 00:12:54.926 Copy: Supported 00:12:54.926 Volatile Write Cache: Present 00:12:54.926 Atomic Write Unit (Normal): 1 00:12:54.926 Atomic Write Unit (PFail): 1 00:12:54.926 Atomic Compare & Write Unit: 1 00:12:54.926 Fused Compare & Write: Supported 00:12:54.926 Scatter-Gather List 00:12:54.926 SGL Command Set: Supported (Dword aligned) 00:12:54.926 SGL Keyed: Not Supported 00:12:54.926 SGL Bit Bucket Descriptor: Not Supported 00:12:54.926 SGL Metadata Pointer: Not Supported 00:12:54.926 Oversized SGL: Not Supported 00:12:54.926 SGL 
Metadata Address: Not Supported 00:12:54.926 SGL Offset: Not Supported 00:12:54.926 Transport SGL Data Block: Not Supported 00:12:54.926 Replay Protected Memory Block: Not Supported 00:12:54.926 00:12:54.926 Firmware Slot Information 00:12:54.926 ========================= 00:12:54.926 Active slot: 1 00:12:54.926 Slot 1 Firmware Revision: 25.01 00:12:54.926 00:12:54.926 00:12:54.926 Commands Supported and Effects 00:12:54.926 ============================== 00:12:54.926 Admin Commands 00:12:54.926 -------------- 00:12:54.926 Get Log Page (02h): Supported 00:12:54.926 Identify (06h): Supported 00:12:54.926 Abort (08h): Supported 00:12:54.926 Set Features (09h): Supported 00:12:54.926 Get Features (0Ah): Supported 00:12:54.926 Asynchronous Event Request (0Ch): Supported 00:12:54.926 Keep Alive (18h): Supported 00:12:54.926 I/O Commands 00:12:54.926 ------------ 00:12:54.926 Flush (00h): Supported LBA-Change 00:12:54.926 Write (01h): Supported LBA-Change 00:12:54.926 Read (02h): Supported 00:12:54.926 Compare (05h): Supported 00:12:54.926 Write Zeroes (08h): Supported LBA-Change 00:12:54.926 Dataset Management (09h): Supported LBA-Change 00:12:54.926 Copy (19h): Supported LBA-Change 00:12:54.926 00:12:54.926 Error Log 00:12:54.926 ========= 00:12:54.926 00:12:54.926 Arbitration 00:12:54.926 =========== 00:12:54.926 Arbitration Burst: 1 00:12:54.926 00:12:54.926 Power Management 00:12:54.926 ================ 00:12:54.926 Number of Power States: 1 00:12:54.926 Current Power State: Power State #0 00:12:54.926 Power State #0: 00:12:54.926 Max Power: 0.00 W 00:12:54.926 Non-Operational State: Operational 00:12:54.926 Entry Latency: Not Reported 00:12:54.926 Exit Latency: Not Reported 00:12:54.926 Relative Read Throughput: 0 00:12:54.926 Relative Read Latency: 0 00:12:54.926 Relative Write Throughput: 0 00:12:54.926 Relative Write Latency: 0 00:12:54.926 Idle Power: Not Reported 00:12:54.926 Active Power: Not Reported 00:12:54.926 Non-Operational Permissive Mode: Not 
Supported 00:12:54.926 00:12:54.926 Health Information 00:12:54.926 ================== 00:12:54.926 Critical Warnings: 00:12:54.926 Available Spare Space: OK 00:12:54.926 Temperature: OK 00:12:54.926 Device Reliability: OK 00:12:54.926 Read Only: No 00:12:54.926 Volatile Memory Backup: OK 00:12:54.926 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:54.926 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:54.926 Available Spare: 0% 00:12:54.926 Available Sp[2024-12-09 04:03:23.456478] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:54.926 [2024-12-09 04:03:23.464299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:54.926 [2024-12-09 04:03:23.464357] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:12:54.926 [2024-12-09 04:03:23.464375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.926 [2024-12-09 04:03:23.464387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.926 [2024-12-09 04:03:23.464397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.926 [2024-12-09 04:03:23.464407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.926 [2024-12-09 04:03:23.464493] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:54.926 [2024-12-09 04:03:23.464514] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:54.926 
[2024-12-09 04:03:23.465494] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:54.926 [2024-12-09 04:03:23.465606] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:12:54.926 [2024-12-09 04:03:23.465621] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:12:54.926 [2024-12-09 04:03:23.466502] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:54.926 [2024-12-09 04:03:23.466526] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:12:54.926 [2024-12-09 04:03:23.466587] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:54.926 [2024-12-09 04:03:23.467810] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:55.184 are Threshold: 0% 00:12:55.184 Life Percentage Used: 0% 00:12:55.184 Data Units Read: 0 00:12:55.184 Data Units Written: 0 00:12:55.184 Host Read Commands: 0 00:12:55.184 Host Write Commands: 0 00:12:55.184 Controller Busy Time: 0 minutes 00:12:55.184 Power Cycles: 0 00:12:55.184 Power On Hours: 0 hours 00:12:55.184 Unsafe Shutdowns: 0 00:12:55.184 Unrecoverable Media Errors: 0 00:12:55.184 Lifetime Error Log Entries: 0 00:12:55.184 Warning Temperature Time: 0 minutes 00:12:55.184 Critical Temperature Time: 0 minutes 00:12:55.184 00:12:55.184 Number of Queues 00:12:55.184 ================ 00:12:55.184 Number of I/O Submission Queues: 127 00:12:55.184 Number of I/O Completion Queues: 127 00:12:55.184 00:12:55.184 Active Namespaces 00:12:55.184 ================= 00:12:55.184 Namespace ID:1 00:12:55.184 Error Recovery Timeout: Unlimited 
00:12:55.184 Command Set Identifier: NVM (00h) 00:12:55.184 Deallocate: Supported 00:12:55.184 Deallocated/Unwritten Error: Not Supported 00:12:55.184 Deallocated Read Value: Unknown 00:12:55.184 Deallocate in Write Zeroes: Not Supported 00:12:55.184 Deallocated Guard Field: 0xFFFF 00:12:55.184 Flush: Supported 00:12:55.184 Reservation: Supported 00:12:55.184 Namespace Sharing Capabilities: Multiple Controllers 00:12:55.184 Size (in LBAs): 131072 (0GiB) 00:12:55.184 Capacity (in LBAs): 131072 (0GiB) 00:12:55.184 Utilization (in LBAs): 131072 (0GiB) 00:12:55.184 NGUID: 1F14A502DA0A41F2920C11B007901159 00:12:55.184 UUID: 1f14a502-da0a-41f2-920c-11b007901159 00:12:55.184 Thin Provisioning: Not Supported 00:12:55.184 Per-NS Atomic Units: Yes 00:12:55.184 Atomic Boundary Size (Normal): 0 00:12:55.184 Atomic Boundary Size (PFail): 0 00:12:55.184 Atomic Boundary Offset: 0 00:12:55.184 Maximum Single Source Range Length: 65535 00:12:55.184 Maximum Copy Length: 65535 00:12:55.184 Maximum Source Range Count: 1 00:12:55.184 NGUID/EUI64 Never Reused: No 00:12:55.184 Namespace Write Protected: No 00:12:55.184 Number of LBA Formats: 1 00:12:55.184 Current LBA Format: LBA Format #00 00:12:55.184 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:55.184 00:12:55.185 04:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:55.185 [2024-12-09 04:03:23.713166] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:00.446 Initializing NVMe Controllers 00:13:00.446 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:00.446 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:00.446 Initialization complete. Launching workers. 00:13:00.446 ======================================================== 00:13:00.446 Latency(us) 00:13:00.446 Device Information : IOPS MiB/s Average min max 00:13:00.446 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31787.39 124.17 4028.37 1187.63 10330.95 00:13:00.446 ======================================================== 00:13:00.446 Total : 31787.39 124.17 4028.37 1187.63 10330.95 00:13:00.446 00:13:00.446 [2024-12-09 04:03:28.820645] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:00.446 04:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:00.703 [2024-12-09 04:03:29.074364] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:05.974 Initializing NVMe Controllers 00:13:05.974 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:05.974 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:05.974 Initialization complete. Launching workers. 
00:13:05.974 ======================================================== 00:13:05.974 Latency(us) 00:13:05.974 Device Information : IOPS MiB/s Average min max 00:13:05.974 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30242.95 118.14 4231.83 1226.34 7621.55 00:13:05.974 ======================================================== 00:13:05.974 Total : 30242.95 118.14 4231.83 1226.34 7621.55 00:13:05.975 00:13:05.975 [2024-12-09 04:03:34.092914] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:05.975 04:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:05.975 [2024-12-09 04:03:34.328189] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.233 [2024-12-09 04:03:39.455450] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:11.233 Initializing NVMe Controllers 00:13:11.233 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:11.233 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:11.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:11.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:11.233 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:11.233 Initialization complete. Launching workers. 
00:13:11.233 Starting thread on core 2 00:13:11.233 Starting thread on core 3 00:13:11.233 Starting thread on core 1 00:13:11.233 04:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:11.233 [2024-12-09 04:03:39.781772] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.512 [2024-12-09 04:03:42.952580] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.512 Initializing NVMe Controllers 00:13:14.512 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.512 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:14.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:14.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:14.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:14.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:14.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:14.512 Initialization complete. Launching workers. 
00:13:14.512 Starting thread on core 1 with urgent priority queue 00:13:14.512 Starting thread on core 2 with urgent priority queue 00:13:14.512 Starting thread on core 3 with urgent priority queue 00:13:14.512 Starting thread on core 0 with urgent priority queue 00:13:14.512 SPDK bdev Controller (SPDK2 ) core 0: 2197.33 IO/s 45.51 secs/100000 ios 00:13:14.512 SPDK bdev Controller (SPDK2 ) core 1: 3359.33 IO/s 29.77 secs/100000 ios 00:13:14.512 SPDK bdev Controller (SPDK2 ) core 2: 2726.00 IO/s 36.68 secs/100000 ios 00:13:14.512 SPDK bdev Controller (SPDK2 ) core 3: 3641.33 IO/s 27.46 secs/100000 ios 00:13:14.512 ======================================================== 00:13:14.512 00:13:14.512 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:14.770 [2024-12-09 04:03:43.266806] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:14.770 Initializing NVMe Controllers 00:13:14.770 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.770 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:14.770 Namespace ID: 1 size: 0GB 00:13:14.770 Initialization complete. 00:13:14.770 INFO: using host memory buffer for IO 00:13:14.770 Hello world! 
00:13:14.770 [2024-12-09 04:03:43.275863] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:14.770 04:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:15.028 [2024-12-09 04:03:43.585022] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:16.401 Initializing NVMe Controllers 00:13:16.401 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:16.401 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:16.401 Initialization complete. Launching workers. 00:13:16.401 submit (in ns) avg, min, max = 9011.1, 3521.1, 4019736.7 00:13:16.401 complete (in ns) avg, min, max = 28038.1, 2060.0, 5011870.0 00:13:16.401 00:13:16.401 Submit histogram 00:13:16.401 ================ 00:13:16.401 Range in us Cumulative Count 00:13:16.401 3.508 - 3.532: 0.0237% ( 3) 00:13:16.401 3.532 - 3.556: 0.4499% ( 54) 00:13:16.401 3.556 - 3.579: 1.3260% ( 111) 00:13:16.401 3.579 - 3.603: 3.9621% ( 334) 00:13:16.401 3.603 - 3.627: 7.8927% ( 498) 00:13:16.401 3.627 - 3.650: 16.4562% ( 1085) 00:13:16.401 3.650 - 3.674: 25.5249% ( 1149) 00:13:16.401 3.674 - 3.698: 34.7593% ( 1170) 00:13:16.401 3.698 - 3.721: 42.2968% ( 955) 00:13:16.401 3.721 - 3.745: 49.8106% ( 952) 00:13:16.401 3.745 - 3.769: 55.9669% ( 780) 00:13:16.401 3.769 - 3.793: 61.7206% ( 729) 00:13:16.401 3.793 - 3.816: 65.7695% ( 513) 00:13:16.401 3.816 - 3.840: 69.0371% ( 414) 00:13:16.401 3.840 - 3.864: 72.7388% ( 469) 00:13:16.401 3.864 - 3.887: 76.2747% ( 448) 00:13:16.401 3.887 - 3.911: 80.3788% ( 520) 00:13:16.401 3.911 - 3.935: 83.6543% ( 415) 00:13:16.401 3.935 - 3.959: 86.0379% ( 302) 00:13:16.401 3.959 - 3.982: 88.2873% ( 285) 00:13:16.401 3.982 - 4.006: 90.1026% ( 230) 
00:13:16.401 4.006 - 4.030: 91.5470% ( 183) 00:13:16.401 4.030 - 4.053: 93.0071% ( 185) 00:13:16.401 4.053 - 4.077: 94.1673% ( 147) 00:13:16.401 4.077 - 4.101: 94.9487% ( 99) 00:13:16.401 4.101 - 4.124: 95.5801% ( 80) 00:13:16.401 4.124 - 4.148: 95.9984% ( 53) 00:13:16.401 4.148 - 4.172: 96.2431% ( 31) 00:13:16.401 4.172 - 4.196: 96.4483% ( 26) 00:13:16.401 4.196 - 4.219: 96.6062% ( 20) 00:13:16.401 4.219 - 4.243: 96.7324% ( 16) 00:13:16.401 4.243 - 4.267: 96.8508% ( 15) 00:13:16.401 4.267 - 4.290: 96.9061% ( 7) 00:13:16.401 4.290 - 4.314: 97.0166% ( 14) 00:13:16.401 4.314 - 4.338: 97.1034% ( 11) 00:13:16.401 4.338 - 4.361: 97.1823% ( 10) 00:13:16.401 4.361 - 4.385: 97.2455% ( 8) 00:13:16.401 4.385 - 4.409: 97.3323% ( 11) 00:13:16.401 4.409 - 4.433: 97.3402% ( 1) 00:13:16.401 4.433 - 4.456: 97.3560% ( 2) 00:13:16.401 4.456 - 4.480: 97.3717% ( 2) 00:13:16.401 4.480 - 4.504: 97.3954% ( 3) 00:13:16.401 4.504 - 4.527: 97.4112% ( 2) 00:13:16.401 4.527 - 4.551: 97.4270% ( 2) 00:13:16.401 4.551 - 4.575: 97.4349% ( 1) 00:13:16.401 4.575 - 4.599: 97.4507% ( 2) 00:13:16.401 4.599 - 4.622: 97.4586% ( 1) 00:13:16.401 4.622 - 4.646: 97.4665% ( 1) 00:13:16.401 4.646 - 4.670: 97.4743% ( 1) 00:13:16.401 4.717 - 4.741: 97.4822% ( 1) 00:13:16.401 4.741 - 4.764: 97.4980% ( 2) 00:13:16.401 4.764 - 4.788: 97.5217% ( 3) 00:13:16.401 4.788 - 4.812: 97.5533% ( 4) 00:13:16.401 4.812 - 4.836: 97.5848% ( 4) 00:13:16.401 4.836 - 4.859: 97.6085% ( 3) 00:13:16.401 4.859 - 4.883: 97.6401% ( 4) 00:13:16.401 4.883 - 4.907: 97.6875% ( 6) 00:13:16.401 4.907 - 4.930: 97.7585% ( 9) 00:13:16.401 4.930 - 4.954: 97.8216% ( 8) 00:13:16.401 4.954 - 4.978: 97.8611% ( 5) 00:13:16.401 4.978 - 5.001: 97.9163% ( 7) 00:13:16.401 5.001 - 5.025: 97.9400% ( 3) 00:13:16.401 5.025 - 5.049: 97.9874% ( 6) 00:13:16.401 5.049 - 5.073: 98.0189% ( 4) 00:13:16.401 5.073 - 5.096: 98.0505% ( 4) 00:13:16.401 5.096 - 5.120: 98.0663% ( 2) 00:13:16.401 5.120 - 5.144: 98.0900% ( 3) 00:13:16.401 5.144 - 5.167: 98.1294% ( 5) 
00:13:16.401 5.167 - 5.191: 98.1452% ( 2) 00:13:16.401 5.191 - 5.215: 98.1768% ( 4) 00:13:16.401 5.215 - 5.239: 98.1847% ( 1) 00:13:16.401 5.239 - 5.262: 98.1926% ( 1) 00:13:16.401 5.262 - 5.286: 98.2242% ( 4) 00:13:16.401 5.286 - 5.310: 98.2320% ( 1) 00:13:16.401 5.333 - 5.357: 98.2478% ( 2) 00:13:16.401 5.357 - 5.381: 98.2557% ( 1) 00:13:16.401 5.404 - 5.428: 98.2636% ( 1) 00:13:16.401 5.428 - 5.452: 98.2873% ( 3) 00:13:16.401 5.452 - 5.476: 98.2952% ( 1) 00:13:16.401 5.476 - 5.499: 98.3031% ( 1) 00:13:16.401 5.499 - 5.523: 98.3110% ( 1) 00:13:16.401 5.523 - 5.547: 98.3189% ( 1) 00:13:16.401 5.594 - 5.618: 98.3268% ( 1) 00:13:16.401 5.784 - 5.807: 98.3346% ( 1) 00:13:16.401 6.068 - 6.116: 98.3504% ( 2) 00:13:16.401 6.116 - 6.163: 98.3583% ( 1) 00:13:16.401 6.163 - 6.210: 98.3662% ( 1) 00:13:16.401 6.258 - 6.305: 98.3741% ( 1) 00:13:16.401 6.637 - 6.684: 98.3820% ( 1) 00:13:16.401 6.732 - 6.779: 98.3899% ( 1) 00:13:16.401 6.779 - 6.827: 98.4057% ( 2) 00:13:16.401 6.921 - 6.969: 98.4136% ( 1) 00:13:16.401 7.016 - 7.064: 98.4215% ( 1) 00:13:16.401 7.064 - 7.111: 98.4294% ( 1) 00:13:16.401 7.111 - 7.159: 98.4373% ( 1) 00:13:16.401 7.159 - 7.206: 98.4451% ( 1) 00:13:16.401 7.301 - 7.348: 98.4609% ( 2) 00:13:16.401 7.348 - 7.396: 98.4688% ( 1) 00:13:16.401 7.396 - 7.443: 98.4846% ( 2) 00:13:16.401 7.443 - 7.490: 98.4925% ( 1) 00:13:16.401 7.490 - 7.538: 98.5083% ( 2) 00:13:16.401 7.538 - 7.585: 98.5162% ( 1) 00:13:16.401 7.585 - 7.633: 98.5241% ( 1) 00:13:16.401 7.633 - 7.680: 98.5320% ( 1) 00:13:16.401 7.775 - 7.822: 98.5399% ( 1) 00:13:16.401 7.870 - 7.917: 98.5478% ( 1) 00:13:16.401 7.917 - 7.964: 98.5635% ( 2) 00:13:16.401 7.964 - 8.012: 98.5714% ( 1) 00:13:16.401 8.012 - 8.059: 98.5793% ( 1) 00:13:16.401 8.059 - 8.107: 98.5951% ( 2) 00:13:16.401 8.107 - 8.154: 98.6030% ( 1) 00:13:16.401 8.154 - 8.201: 98.6109% ( 1) 00:13:16.401 8.201 - 8.249: 98.6267% ( 2) 00:13:16.401 8.249 - 8.296: 98.6504% ( 3) 00:13:16.401 8.344 - 8.391: 98.6582% ( 1) 00:13:16.401 8.439 - 
8.486: 98.6661% ( 1) 00:13:16.401 8.486 - 8.533: 98.6740% ( 1) 00:13:16.401 8.723 - 8.770: 98.6898% ( 2) 00:13:16.401 8.770 - 8.818: 98.6977% ( 1) 00:13:16.401 8.913 - 8.960: 98.7056% ( 1) 00:13:16.401 9.007 - 9.055: 98.7214% ( 2) 00:13:16.401 9.102 - 9.150: 98.7372% ( 2) 00:13:16.401 9.197 - 9.244: 98.7451% ( 1) 00:13:16.401 9.244 - 9.292: 98.7530% ( 1) 00:13:16.401 9.529 - 9.576: 98.7687% ( 2) 00:13:16.401 9.813 - 9.861: 98.7766% ( 1) 00:13:16.401 9.908 - 9.956: 98.7924% ( 2) 00:13:16.401 10.193 - 10.240: 98.8003% ( 1) 00:13:16.401 10.335 - 10.382: 98.8082% ( 1) 00:13:16.401 10.572 - 10.619: 98.8240% ( 2) 00:13:16.401 10.667 - 10.714: 98.8319% ( 1) 00:13:16.401 11.046 - 11.093: 98.8398% ( 1) 00:13:16.401 11.236 - 11.283: 98.8477% ( 1) 00:13:16.401 11.473 - 11.520: 98.8556% ( 1) 00:13:16.401 11.852 - 11.899: 98.8635% ( 1) 00:13:16.401 12.326 - 12.421: 98.8713% ( 1) 00:13:16.401 12.610 - 12.705: 98.8792% ( 1) 00:13:16.401 13.084 - 13.179: 98.8871% ( 1) 00:13:16.401 13.369 - 13.464: 98.8950% ( 1) 00:13:16.401 13.559 - 13.653: 98.9029% ( 1) 00:13:16.401 13.653 - 13.748: 98.9108% ( 1) 00:13:16.401 13.843 - 13.938: 98.9187% ( 1) 00:13:16.401 13.938 - 14.033: 98.9266% ( 1) 00:13:16.401 14.127 - 14.222: 98.9345% ( 1) 00:13:16.401 14.317 - 14.412: 98.9424% ( 1) 00:13:16.401 17.067 - 17.161: 98.9503% ( 1) 00:13:16.401 17.351 - 17.446: 98.9661% ( 2) 00:13:16.401 17.446 - 17.541: 99.0134% ( 6) 00:13:16.401 17.541 - 17.636: 99.0371% ( 3) 00:13:16.401 17.636 - 17.730: 99.0608% ( 3) 00:13:16.401 17.730 - 17.825: 99.0766% ( 2) 00:13:16.401 17.825 - 17.920: 99.1318% ( 7) 00:13:16.401 17.920 - 18.015: 99.1949% ( 8) 00:13:16.401 18.015 - 18.110: 99.2660% ( 9) 00:13:16.401 18.110 - 18.204: 99.3133% ( 6) 00:13:16.401 18.204 - 18.299: 99.3923% ( 10) 00:13:16.401 18.299 - 18.394: 99.4791% ( 11) 00:13:16.401 18.394 - 18.489: 99.5343% ( 7) 00:13:16.401 18.489 - 18.584: 99.5738% ( 5) 00:13:16.402 18.584 - 18.679: 99.6448% ( 9) 00:13:16.402 18.679 - 18.773: 99.6764% ( 4) 00:13:16.402 
18.773 - 18.868: 99.7080% ( 4) 00:13:16.402 18.868 - 18.963: 99.7474% ( 5) 00:13:16.402 18.963 - 19.058: 99.7711% ( 3) 00:13:16.402 19.058 - 19.153: 99.7948% ( 3) 00:13:16.402 19.342 - 19.437: 99.8106% ( 2) 00:13:16.402 19.437 - 19.532: 99.8185% ( 1) 00:13:16.402 19.627 - 19.721: 99.8264% ( 1) 00:13:16.402 20.006 - 20.101: 99.8343% ( 1) 00:13:16.402 21.902 - 21.997: 99.8421% ( 1) 00:13:16.402 22.661 - 22.756: 99.8500% ( 1) 00:13:16.402 23.514 - 23.609: 99.8579% ( 1) 00:13:16.402 24.462 - 24.652: 99.8658% ( 1) 00:13:16.402 24.841 - 25.031: 99.8737% ( 1) 00:13:16.402 3980.705 - 4004.978: 99.9684% ( 12) 00:13:16.402 4004.978 - 4029.250: 100.0000% ( 4) 00:13:16.402 00:13:16.402 Complete histogram 00:13:16.402 ================== 00:13:16.402 Range in us Cumulative Count 00:13:16.402 2.050 - 2.062: 0.0868% ( 11) 00:13:16.402 2.062 - 2.074: 16.2983% ( 2054) 00:13:16.402 2.074 - 2.086: 38.8398% ( 2856) 00:13:16.402 2.086 - 2.098: 40.6314% ( 227) 00:13:16.402 2.098 - 2.110: 54.1121% ( 1708) 00:13:16.402 2.110 - 2.121: 61.1839% ( 896) 00:13:16.402 2.121 - 2.133: 63.3860% ( 279) 00:13:16.402 2.133 - 2.145: 74.5620% ( 1416) 00:13:16.402 2.145 - 2.157: 79.7001% ( 651) 00:13:16.402 2.157 - 2.169: 81.4286% ( 219) 00:13:16.402 2.169 - 2.181: 86.4483% ( 636) 00:13:16.402 2.181 - 2.193: 87.9874% ( 195) 00:13:16.402 2.193 - 2.204: 88.8556% ( 110) 00:13:16.402 2.204 - 2.216: 90.7498% ( 240) 00:13:16.402 2.216 - 2.228: 93.5359% ( 353) 00:13:16.402 2.228 - 2.240: 94.1910% ( 83) 00:13:16.402 2.240 - 2.252: 94.7672% ( 73) 00:13:16.402 2.252 - 2.264: 94.9803% ( 27) 00:13:16.402 2.264 - 2.276: 95.1066% ( 16) 00:13:16.402 2.276 - 2.287: 95.3907% ( 36) 00:13:16.402 2.287 - 2.299: 95.8406% ( 57) 00:13:16.402 2.299 - 2.311: 95.9511% ( 14) 00:13:16.402 2.311 - 2.323: 96.0221% ( 9) 00:13:16.402 2.323 - 2.335: 96.0773% ( 7) 00:13:16.402 2.335 - 2.347: 96.1010% ( 3) 00:13:16.402 2.347 - 2.359: 96.2273% ( 16) 00:13:16.402 2.359 - 2.370: 96.5272% ( 38) 00:13:16.402 2.370 - 2.382: 96.9850% ( 58) 
00:13:16.402 2.382 - 2.394: 97.3244% ( 43) 00:13:16.402 2.394 - 2.406: 97.6243% ( 38) 00:13:16.402 2.406 - 2.418: 97.7979% ( 22) 00:13:16.402 2.418 - 2.430: 97.9558% ( 20) 00:13:16.402 2.430 - 2.441: 98.0663% ( 14) 00:13:16.402 2.441 - 2.453: 98.1531% ( 11) 00:13:16.402 2.453 - 2.465: 98.1847% ( 4) 00:13:16.402 2.465 - 2.477: 98.2399% ( 7) 00:13:16.402 2.477 - 2.489: 98.2873% ( 6) 00:13:16.402 2.489 - 2.501: 98.3189% ( 4) 00:13:16.402 2.501 - 2.513: 98.3425% ( 3) 00:13:16.402 2.513 - 2.524: 98.3583% ( 2) 00:13:16.402 2.524 - 2.536: 98.3820% ( 3) 00:13:16.402 2.560 - 2.572: 98.3899% ( 1) 00:13:16.402 2.607 - 2.619: 98.3978% ( 1) 00:13:16.402 2.619 - 2.631: 98.4057% ( 1) 00:13:16.402 2.655 - 2.667: 98.4215% ( 2) 00:13:16.402 2.690 - 2.702: 98.4373% ( 2) 00:13:16.402 2.714 - 2.726: 98.4530% ( 2) 00:13:16.402 2.761 - 2.773: 98.4609% ( 1) 00:13:16.402 2.892 - 2.904: 98.4688% ( 1) 00:13:16.402 3.034 - 3.058: 98.4767% ( 1) 00:13:16.402 3.319 - 3.342: 98.4846% ( 1) 00:13:16.402 3.390 - 3.413: 98.4925% ( 1) 00:13:16.402 3.461 - 3.484: 98.5162% ( 3) 00:13:16.402 3.484 - 3.508: 98.5241% ( 1) 00:13:16.402 3.508 - 3.532: 98.5478% ( 3) 00:13:16.402 3.532 - 3.556: 98.5556% ( 1) 00:13:16.402 3.556 - 3.579: 98.5635% ( 1) 00:13:16.402 3.627 - 3.650: 98.5714% ( 1) 00:13:16.402 3.793 - 3.816: 98.5793% ( 1) 00:13:16.402 4.077 - 4.101: 98.5872% ( 1) 00:13:16.402 4.124 - 4.148: 98.5951% ( 1) 00:13:16.402 4.243 - 4.267: 98.6030% ( 1) 00:13:16.402 4.267 - 4.290: 98.6109% ( 1) 00:13:16.402 5.120 - 5.144: 98.6188% ( 1) 00:13:16.402 5.333 - 5.357: 98.6267% ( 1) 00:13:16.402 5.452 - 5.476: 98.6346% ( 1) 00:13:16.402 5.476 - 5.499: 98.6425% ( 1) 00:13:16.402 5.523 - 5.547: 98.6504% ( 1) 00:13:16.402 5.879 - 5.902: 98.6661% ( 2) 00:13:16.402 5.902 - 5.926: 98.6740% ( 1) 00:13:16.402 5.950 - 5.973: 98.6819% ( 1) 00:13:16.402 6.258 - 6.305: 98.6898% ( 1) 00:13:16.402 6.305 - 6.353: 98.6977% ( 1) 00:13:16.402 6.400 - 6.447: 98.7056% ( 1) 00:13:16.402 6.542 - 6.590: 98.7135% ( 1) 00:13:16.402 6.590 
- 6.637: 98.7214% ( 1) 00:13:16.402 [2024-12-09 04:03:44.685006] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:16.402 6.732 - 6.779: 98.7293% ( 1) 00:13:16.402 6.779 - 6.827: 98.7372% ( 1) 00:13:16.402 6.827 - 6.874: 98.7451% ( 1) 00:13:16.402 6.921 - 6.969: 98.7530% ( 1) 00:13:16.402 6.969 - 7.016: 98.7609% ( 1) 00:13:16.402 7.253 - 7.301: 98.7687% ( 1) 00:13:16.402 7.396 - 7.443: 98.7766% ( 1) 00:13:16.402 7.490 - 7.538: 98.7845% ( 1) 00:13:16.402 9.766 - 9.813: 98.7924% ( 1) 00:13:16.402 15.644 - 15.739: 98.8003% ( 1) 00:13:16.402 15.739 - 15.834: 98.8082% ( 1) 00:13:16.402 15.834 - 15.929: 98.8477% ( 5) 00:13:16.402 15.929 - 16.024: 98.8950% ( 6) 00:13:16.402 16.024 - 16.119: 98.9661% ( 9) 00:13:16.402 16.119 - 16.213: 98.9740% ( 1) 00:13:16.402 16.213 - 16.308: 98.9897% ( 2) 00:13:16.402 16.308 - 16.403: 99.0055% ( 2) 00:13:16.402 16.403 - 16.498: 99.0134% ( 1) 00:13:16.402 16.498 - 16.593: 99.0845% ( 9) 00:13:16.402 16.593 - 16.687: 99.1160% ( 4) 00:13:16.402 16.687 - 16.782: 99.1476% ( 4) 00:13:16.402 16.782 - 16.877: 99.1792% ( 4) 00:13:16.402 16.877 - 16.972: 99.2028% ( 3) 00:13:16.402 16.972 - 17.067: 99.2265% ( 3) 00:13:16.402 17.067 - 17.161: 99.2344% ( 1) 00:13:16.402 17.161 - 17.256: 99.2581% ( 3) 00:13:16.402 17.351 - 17.446: 99.2660% ( 1) 00:13:16.402 17.541 - 17.636: 99.2739% ( 1) 00:13:16.402 17.636 - 17.730: 99.2818% ( 1) 00:13:16.402 17.825 - 17.920: 99.3054% ( 3) 00:13:16.402 17.920 - 18.015: 99.3133% ( 1) 00:13:16.402 18.204 - 18.299: 99.3212% ( 1) 00:13:16.402 18.299 - 18.394: 99.3291% ( 1) 00:13:16.402 18.394 - 18.489: 99.3370% ( 1) 00:13:16.402 18.868 - 18.963: 99.3449% ( 1) 00:13:16.402 27.307 - 27.496: 99.3528% ( 1) 00:13:16.402 2415.123 - 2427.259: 99.3607% ( 1) 00:13:16.402 2597.167 - 2609.304: 99.3686% ( 1) 00:13:16.402 3980.705 - 4004.978: 99.8185% ( 57) 00:13:16.402 4004.978 - 4029.250: 99.9763% ( 20) 00:13:16.402 4029.250 - 4053.523: 99.9842% ( 1) 00:13:16.402 4975.881 - 
5000.154: 99.9921% ( 1) 00:13:16.402 5000.154 - 5024.427: 100.0000% ( 1) 00:13:16.402 00:13:16.402 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:16.402 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:16.402 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:16.402 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:16.402 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:16.660 [ 00:13:16.660 { 00:13:16.660 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:16.660 "subtype": "Discovery", 00:13:16.660 "listen_addresses": [], 00:13:16.660 "allow_any_host": true, 00:13:16.660 "hosts": [] 00:13:16.660 }, 00:13:16.660 { 00:13:16.660 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:16.660 "subtype": "NVMe", 00:13:16.660 "listen_addresses": [ 00:13:16.660 { 00:13:16.660 "trtype": "VFIOUSER", 00:13:16.660 "adrfam": "IPv4", 00:13:16.660 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:16.660 "trsvcid": "0" 00:13:16.660 } 00:13:16.660 ], 00:13:16.660 "allow_any_host": true, 00:13:16.660 "hosts": [], 00:13:16.660 "serial_number": "SPDK1", 00:13:16.660 "model_number": "SPDK bdev Controller", 00:13:16.660 "max_namespaces": 32, 00:13:16.660 "min_cntlid": 1, 00:13:16.660 "max_cntlid": 65519, 00:13:16.660 "namespaces": [ 00:13:16.660 { 00:13:16.660 "nsid": 1, 00:13:16.660 "bdev_name": "Malloc1", 00:13:16.660 "name": "Malloc1", 00:13:16.660 "nguid": "07D9A539FF234D2C94FF04FF7F2B2437", 00:13:16.660 "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437" 00:13:16.660 }, 00:13:16.660 { 
00:13:16.660 "nsid": 2, 00:13:16.660 "bdev_name": "Malloc3", 00:13:16.660 "name": "Malloc3", 00:13:16.660 "nguid": "FE3543652DAC4D0FB8FA008A85669FA7", 00:13:16.660 "uuid": "fe354365-2dac-4d0f-b8fa-008a85669fa7" 00:13:16.660 } 00:13:16.660 ] 00:13:16.660 }, 00:13:16.660 { 00:13:16.660 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:16.660 "subtype": "NVMe", 00:13:16.660 "listen_addresses": [ 00:13:16.660 { 00:13:16.661 "trtype": "VFIOUSER", 00:13:16.661 "adrfam": "IPv4", 00:13:16.661 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:16.661 "trsvcid": "0" 00:13:16.661 } 00:13:16.661 ], 00:13:16.661 "allow_any_host": true, 00:13:16.661 "hosts": [], 00:13:16.661 "serial_number": "SPDK2", 00:13:16.661 "model_number": "SPDK bdev Controller", 00:13:16.661 "max_namespaces": 32, 00:13:16.661 "min_cntlid": 1, 00:13:16.661 "max_cntlid": 65519, 00:13:16.661 "namespaces": [ 00:13:16.661 { 00:13:16.661 "nsid": 1, 00:13:16.661 "bdev_name": "Malloc2", 00:13:16.661 "name": "Malloc2", 00:13:16.661 "nguid": "1F14A502DA0A41F2920C11B007901159", 00:13:16.661 "uuid": "1f14a502-da0a-41f2-920c-11b007901159" 00:13:16.661 } 00:13:16.661 ] 00:13:16.661 } 00:13:16.661 ] 00:13:16.661 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:16.661 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=210264 00:13:16.661 04:03:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:16.661 04:03:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:13:16.661 [2024-12-09 04:03:45.173788] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:16.661 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:17.225 Malloc4 00:13:17.225 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:17.483 [2024-12-09 04:03:45.806648] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.483 04:03:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:17.483 Asynchronous Event Request test 00:13:17.483 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.483 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:17.483 Registering asynchronous event callbacks... 00:13:17.483 Starting namespace attribute notice tests for all controllers... 00:13:17.483 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:17.483 aer_cb - Changed Namespace 00:13:17.483 Cleaning up... 
00:13:17.741 [ 00:13:17.741 { 00:13:17.741 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:17.741 "subtype": "Discovery", 00:13:17.741 "listen_addresses": [], 00:13:17.741 "allow_any_host": true, 00:13:17.741 "hosts": [] 00:13:17.741 }, 00:13:17.741 { 00:13:17.741 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:17.741 "subtype": "NVMe", 00:13:17.741 "listen_addresses": [ 00:13:17.741 { 00:13:17.741 "trtype": "VFIOUSER", 00:13:17.741 "adrfam": "IPv4", 00:13:17.741 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:17.741 "trsvcid": "0" 00:13:17.741 } 00:13:17.741 ], 00:13:17.741 "allow_any_host": true, 00:13:17.741 "hosts": [], 00:13:17.741 "serial_number": "SPDK1", 00:13:17.741 "model_number": "SPDK bdev Controller", 00:13:17.741 "max_namespaces": 32, 00:13:17.741 "min_cntlid": 1, 00:13:17.741 "max_cntlid": 65519, 00:13:17.741 "namespaces": [ 00:13:17.741 { 00:13:17.741 "nsid": 1, 00:13:17.741 "bdev_name": "Malloc1", 00:13:17.741 "name": "Malloc1", 00:13:17.741 "nguid": "07D9A539FF234D2C94FF04FF7F2B2437", 00:13:17.741 "uuid": "07d9a539-ff23-4d2c-94ff-04ff7f2b2437" 00:13:17.741 }, 00:13:17.741 { 00:13:17.741 "nsid": 2, 00:13:17.741 "bdev_name": "Malloc3", 00:13:17.741 "name": "Malloc3", 00:13:17.741 "nguid": "FE3543652DAC4D0FB8FA008A85669FA7", 00:13:17.741 "uuid": "fe354365-2dac-4d0f-b8fa-008a85669fa7" 00:13:17.741 } 00:13:17.741 ] 00:13:17.741 }, 00:13:17.741 { 00:13:17.741 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:17.741 "subtype": "NVMe", 00:13:17.741 "listen_addresses": [ 00:13:17.741 { 00:13:17.741 "trtype": "VFIOUSER", 00:13:17.741 "adrfam": "IPv4", 00:13:17.741 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:17.741 "trsvcid": "0" 00:13:17.741 } 00:13:17.741 ], 00:13:17.741 "allow_any_host": true, 00:13:17.741 "hosts": [], 00:13:17.741 "serial_number": "SPDK2", 00:13:17.741 "model_number": "SPDK bdev Controller", 00:13:17.741 "max_namespaces": 32, 00:13:17.741 "min_cntlid": 1, 00:13:17.741 "max_cntlid": 65519, 00:13:17.741 "namespaces": [ 
00:13:17.741 { 00:13:17.741 "nsid": 1, 00:13:17.741 "bdev_name": "Malloc2", 00:13:17.741 "name": "Malloc2", 00:13:17.741 "nguid": "1F14A502DA0A41F2920C11B007901159", 00:13:17.741 "uuid": "1f14a502-da0a-41f2-920c-11b007901159" 00:13:17.741 }, 00:13:17.741 { 00:13:17.741 "nsid": 2, 00:13:17.741 "bdev_name": "Malloc4", 00:13:17.741 "name": "Malloc4", 00:13:17.741 "nguid": "2D95B702C2E440178768BBC44C8575A9", 00:13:17.741 "uuid": "2d95b702-c2e4-4017-8768-bbc44c8575a9" 00:13:17.741 } 00:13:17.741 ] 00:13:17.742 } 00:13:17.742 ] 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 210264 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 203925 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 203925 ']' 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 203925 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 203925 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 203925' 00:13:17.742 killing process with pid 203925 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 203925 00:13:17.742 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 203925 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=210412 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 210412' 00:13:17.999 Process pid: 210412 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 210412 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 210412 ']' 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:17.999 04:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:17.999 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:17.999 [2024-12-09 04:03:46.499644] thread.c:3005:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:17.999 [2024-12-09 04:03:46.500691] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:13:17.999 [2024-12-09 04:03:46.500750] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:17.999 [2024-12-09 04:03:46.565017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:18.257 [2024-12-09 04:03:46.619207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:18.257 [2024-12-09 04:03:46.619270] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:18.257 [2024-12-09 04:03:46.619306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:18.257 [2024-12-09 04:03:46.619318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:18.257 [2024-12-09 04:03:46.619328] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:18.257 [2024-12-09 04:03:46.620718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:18.257 [2024-12-09 04:03:46.620779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.257 [2024-12-09 04:03:46.620846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:18.257 [2024-12-09 04:03:46.620849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.257 [2024-12-09 04:03:46.704020] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:18.257 [2024-12-09 04:03:46.704263] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:18.257 [2024-12-09 04:03:46.704580] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:18.257 [2024-12-09 04:03:46.705209] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:18.257 [2024-12-09 04:03:46.705458] thread.c:2143:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:18.257 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:18.257 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:18.257 04:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:19.189 04:03:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:19.755 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:19.755 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:19.755 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:19.755 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:19.755 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:20.015 Malloc1 00:13:20.015 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:20.273 04:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:20.532 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:20.792 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:20.792 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:20.792 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:21.356 Malloc2 00:13:21.356 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:21.614 04:03:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:21.871 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 210412 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 210412 ']' 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 210412 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:22.128 04:03:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 210412 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 210412' 00:13:22.128 killing process with pid 210412 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 210412 00:13:22.128 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 210412 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:22.386 00:13:22.386 real 0m54.165s 00:13:22.386 user 3m29.209s 00:13:22.386 sys 0m4.032s 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:22.386 ************************************ 00:13:22.386 END TEST nvmf_vfio_user 00:13:22.386 ************************************ 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:22.386 ************************************ 00:13:22.386 START TEST nvmf_vfio_user_nvme_compliance 00:13:22.386 ************************************ 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:22.386 * Looking for test storage... 00:13:22.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:13:22.386 04:03:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:13:22.645 04:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:22.645 04:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:22.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.645 --rc genhtml_branch_coverage=1 00:13:22.645 --rc genhtml_function_coverage=1 00:13:22.645 --rc genhtml_legend=1 00:13:22.645 --rc geninfo_all_blocks=1 00:13:22.645 --rc geninfo_unexecuted_blocks=1 00:13:22.645 00:13:22.645 ' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:22.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.645 --rc genhtml_branch_coverage=1 00:13:22.645 --rc genhtml_function_coverage=1 00:13:22.645 --rc genhtml_legend=1 00:13:22.645 --rc geninfo_all_blocks=1 00:13:22.645 --rc geninfo_unexecuted_blocks=1 00:13:22.645 00:13:22.645 ' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:22.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.645 --rc genhtml_branch_coverage=1 00:13:22.645 --rc genhtml_function_coverage=1 00:13:22.645 --rc 
genhtml_legend=1 00:13:22.645 --rc geninfo_all_blocks=1 00:13:22.645 --rc geninfo_unexecuted_blocks=1 00:13:22.645 00:13:22.645 ' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:22.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:22.645 --rc genhtml_branch_coverage=1 00:13:22.645 --rc genhtml_function_coverage=1 00:13:22.645 --rc genhtml_legend=1 00:13:22.645 --rc geninfo_all_blocks=1 00:13:22.645 --rc geninfo_unexecuted_blocks=1 00:13:22.645 00:13:22.645 ' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:13:22.645 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.646 04:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:22.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:22.646 04:03:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=211026 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 211026' 00:13:22.646 Process pid: 211026 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 211026 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 211026 ']' 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.646 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:22.646 [2024-12-09 04:03:51.100780] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:13:22.646 [2024-12-09 04:03:51.100858] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.646 [2024-12-09 04:03:51.166869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.904 [2024-12-09 04:03:51.222441] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.904 [2024-12-09 04:03:51.222493] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.904 [2024-12-09 04:03:51.222520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.904 [2024-12-09 04:03:51.222531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.904 [2024-12-09 04:03:51.222540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:22.904 [2024-12-09 04:03:51.223986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.904 [2024-12-09 04:03:51.224053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.904 [2024-12-09 04:03:51.224057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.904 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.904 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:13:22.904 04:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.837 04:03:52 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:23.837 malloc0 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.837 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:24.095 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.095 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:24.095 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.095 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:24.095 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:24.095 04:03:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:24.095 00:13:24.095 00:13:24.095 CUnit - A unit testing framework for C - Version 2.1-3 00:13:24.095 http://cunit.sourceforge.net/ 00:13:24.095 00:13:24.095 00:13:24.095 Suite: nvme_compliance 00:13:24.095 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-09 04:03:52.602780] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.095 [2024-12-09 04:03:52.604321] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:24.095 [2024-12-09 04:03:52.604355] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:24.095 [2024-12-09 04:03:52.604369] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:24.095 [2024-12-09 04:03:52.605794] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.095 passed 00:13:24.353 Test: admin_identify_ctrlr_verify_fused ...[2024-12-09 04:03:52.692416] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.353 [2024-12-09 04:03:52.695438] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.353 passed 00:13:24.353 Test: admin_identify_ns ...[2024-12-09 04:03:52.780802] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.353 [2024-12-09 04:03:52.840303] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:24.353 [2024-12-09 04:03:52.848293] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:24.353 [2024-12-09 04:03:52.869403] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:13:24.353 passed 00:13:24.611 Test: admin_get_features_mandatory_features ...[2024-12-09 04:03:52.952913] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.611 [2024-12-09 04:03:52.958945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.611 passed 00:13:24.611 Test: admin_get_features_optional_features ...[2024-12-09 04:03:53.043519] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.611 [2024-12-09 04:03:53.046539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.611 passed 00:13:24.611 Test: admin_set_features_number_of_queues ...[2024-12-09 04:03:53.128783] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.869 [2024-12-09 04:03:53.233390] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.869 passed 00:13:24.869 Test: admin_get_log_page_mandatory_logs ...[2024-12-09 04:03:53.316942] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:24.869 [2024-12-09 04:03:53.319962] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:24.869 passed 00:13:24.869 Test: admin_get_log_page_with_lpo ...[2024-12-09 04:03:53.398754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.127 [2024-12-09 04:03:53.470292] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:25.127 [2024-12-09 04:03:53.483345] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.127 passed 00:13:25.127 Test: fabric_property_get ...[2024-12-09 04:03:53.562946] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.127 [2024-12-09 04:03:53.564224] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:25.127 [2024-12-09 04:03:53.565967] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.127 passed 00:13:25.127 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-09 04:03:53.651532] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.127 [2024-12-09 04:03:53.652855] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:25.127 [2024-12-09 04:03:53.654567] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.127 passed 00:13:25.386 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-09 04:03:53.736787] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.386 [2024-12-09 04:03:53.820280] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:25.386 [2024-12-09 04:03:53.836282] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:25.386 [2024-12-09 04:03:53.841391] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.386 passed 00:13:25.386 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-09 04:03:53.924928] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.386 [2024-12-09 04:03:53.926251] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:25.386 [2024-12-09 04:03:53.927946] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.386 passed 00:13:25.643 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-09 04:03:54.010123] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.643 [2024-12-09 04:03:54.087286] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:25.643 [2024-12-09 
04:03:54.111281] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:25.643 [2024-12-09 04:03:54.116394] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.643 passed 00:13:25.643 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-09 04:03:54.198900] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.643 [2024-12-09 04:03:54.200230] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:25.643 [2024-12-09 04:03:54.200282] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:25.643 [2024-12-09 04:03:54.201924] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.901 passed 00:13:25.901 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-09 04:03:54.286218] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:25.901 [2024-12-09 04:03:54.383289] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:25.901 [2024-12-09 04:03:54.391294] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:25.901 [2024-12-09 04:03:54.399287] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:25.901 [2024-12-09 04:03:54.407289] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:25.901 [2024-12-09 04:03:54.436380] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:25.901 passed 00:13:26.159 Test: admin_create_io_sq_verify_pc ...[2024-12-09 04:03:54.518609] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:26.159 [2024-12-09 04:03:54.534311] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:26.159 [2024-12-09 04:03:54.551971] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:26.159 passed 00:13:26.159 Test: admin_create_io_qp_max_qps ...[2024-12-09 04:03:54.637589] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.544 [2024-12-09 04:03:55.738306] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:13:27.801 [2024-12-09 04:03:56.132934] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:27.801 passed 00:13:27.801 Test: admin_create_io_sq_shared_cq ...[2024-12-09 04:03:56.216337] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:27.801 [2024-12-09 04:03:56.347286] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:28.058 [2024-12-09 04:03:56.387372] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:28.058 passed 00:13:28.058 00:13:28.058 Run Summary: Type Total Ran Passed Failed Inactive 00:13:28.058 suites 1 1 n/a 0 0 00:13:28.058 tests 18 18 18 0 0 00:13:28.058 asserts 360 360 360 0 n/a 00:13:28.058 00:13:28.058 Elapsed time = 1.570 seconds 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 211026 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 211026 ']' 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 211026 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211026 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211026' 00:13:28.058 killing process with pid 211026 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 211026 00:13:28.058 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 211026 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:28.316 00:13:28.316 real 0m5.854s 00:13:28.316 user 0m16.447s 00:13:28.316 sys 0m0.532s 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:28.316 ************************************ 00:13:28.316 END TEST nvmf_vfio_user_nvme_compliance 00:13:28.316 ************************************ 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.316 ************************************ 00:13:28.316 START TEST nvmf_vfio_user_fuzz 00:13:28.316 ************************************ 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:28.316 * Looking for test storage... 00:13:28.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:13:28.316 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.575 04:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:28.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.575 --rc genhtml_branch_coverage=1 00:13:28.575 --rc genhtml_function_coverage=1 00:13:28.575 --rc genhtml_legend=1 00:13:28.575 --rc geninfo_all_blocks=1 00:13:28.575 --rc geninfo_unexecuted_blocks=1 00:13:28.575 00:13:28.575 ' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:28.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.575 --rc genhtml_branch_coverage=1 00:13:28.575 --rc genhtml_function_coverage=1 00:13:28.575 --rc genhtml_legend=1 00:13:28.575 --rc geninfo_all_blocks=1 00:13:28.575 --rc geninfo_unexecuted_blocks=1 00:13:28.575 00:13:28.575 ' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:28.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.575 --rc genhtml_branch_coverage=1 00:13:28.575 --rc genhtml_function_coverage=1 00:13:28.575 --rc genhtml_legend=1 00:13:28.575 --rc geninfo_all_blocks=1 00:13:28.575 --rc geninfo_unexecuted_blocks=1 00:13:28.575 00:13:28.575 ' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:28.575 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:28.575 --rc genhtml_branch_coverage=1 00:13:28.575 --rc genhtml_function_coverage=1 00:13:28.575 --rc genhtml_legend=1 00:13:28.575 --rc geninfo_all_blocks=1 00:13:28.575 --rc geninfo_unexecuted_blocks=1 00:13:28.575 00:13:28.575 ' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.575 04:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:28.575 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:28.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=211846 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 211846' 00:13:28.576 Process pid: 211846 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 211846 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 211846 ']' 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:28.576 04:03:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:28.576 04:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:28.834 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.834 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:13:28.834 04:03:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 malloc0 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:29.766 04:03:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:01.823 Fuzzing completed. Shutting down the fuzz application 00:14:01.823 00:14:01.823 Dumping successful admin opcodes: 00:14:01.823 9, 10, 00:14:01.823 Dumping successful io opcodes: 00:14:01.823 0, 00:14:01.823 NS: 0x20000081ef00 I/O qp, Total commands completed: 676156, total successful commands: 2634, random_seed: 3906279744 00:14:01.823 NS: 0x20000081ef00 admin qp, Total commands completed: 124240, total successful commands: 29, random_seed: 965392896 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 211846 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 211846 ']' 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 211846 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 211846 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 211846' 00:14:01.823 killing process with pid 211846 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 211846 00:14:01.823 04:04:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 211846 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:01.823 00:14:01.823 real 0m32.278s 00:14:01.823 user 0m33.054s 00:14:01.823 sys 0m26.054s 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:01.823 ************************************ 00:14:01.823 END TEST nvmf_vfio_user_fuzz 00:14:01.823 ************************************ 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:01.823 ************************************ 00:14:01.823 START TEST nvmf_auth_target 00:14:01.823 ************************************ 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:01.823 * Looking for test storage... 00:14:01.823 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 
00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.823 --rc genhtml_branch_coverage=1 00:14:01.823 --rc genhtml_function_coverage=1 00:14:01.823 --rc genhtml_legend=1 00:14:01.823 --rc geninfo_all_blocks=1 00:14:01.823 --rc geninfo_unexecuted_blocks=1 00:14:01.823 00:14:01.823 ' 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.823 --rc genhtml_branch_coverage=1 00:14:01.823 --rc genhtml_function_coverage=1 00:14:01.823 --rc genhtml_legend=1 00:14:01.823 --rc geninfo_all_blocks=1 00:14:01.823 --rc geninfo_unexecuted_blocks=1 00:14:01.823 00:14:01.823 ' 00:14:01.823 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:01.823 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.823 --rc genhtml_branch_coverage=1 00:14:01.823 --rc genhtml_function_coverage=1 00:14:01.823 --rc genhtml_legend=1 00:14:01.823 --rc geninfo_all_blocks=1 00:14:01.823 --rc geninfo_unexecuted_blocks=1 00:14:01.823 00:14:01.823 ' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:01.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:01.824 --rc genhtml_branch_coverage=1 00:14:01.824 --rc genhtml_function_coverage=1 00:14:01.824 --rc genhtml_legend=1 00:14:01.824 --rc geninfo_all_blocks=1 00:14:01.824 --rc geninfo_unexecuted_blocks=1 00:14:01.824 00:14:01.824 ' 
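The lcov gate above runs `lt 1.15 2` via `cmp_versions` in scripts/common.sh: both version strings are split into fields and compared numerically, left to right. A minimal sketch of that comparison — `ver_lt` is a hypothetical name for illustration (SPDK's helper is `lt`, and it also splits on `-` and `:`):

```shell
#!/usr/bin/env bash
# Sketch of the version gate above: split both versions on '.', compare
# numerically field by field, padding the shorter one with zeros.
ver_lt() {
    local -a v1 v2
    IFS=. read -ra v1 <<< "$1"
    IFS=. read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < len; i++)); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1   # versions are equal, so not strictly less-than
}

ver_lt 1.15 2 && echo "lcov is older than 2.x"
```

Comparing per-field rather than lexically is what makes `1.2.3 < 1.10` come out right, which a plain string compare would get wrong.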
00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme 
connect' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:01.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:01.824 04:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:01.824 04:04:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:14:01.824 04:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:14:03.203 04:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:03.203 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:03.203 04:04:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:03.204 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:03.204 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:03.204 
04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:03.204 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:03.204 
04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:03.204 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:03.204 04:04:31 
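The `gather_supported_nvmf_pci_devs` pass above finds the two E810 functions (0000:0a:00.0/0000:0a:00.1, vendor 0x8086 device 0x159b) and the cvl_0_0/cvl_0_1 interfaces the ice driver exposes under them. A simplified sketch of that sysfs walk, trimmed to the single vendor/device pair this log actually hit (the real helper also checks mlx5 ids, link state, and RDMA support):

```shell
#!/usr/bin/env bash
# Sketch of the NIC discovery loop above: walk /sys/bus/pci/devices, keep
# functions matching the Intel E810 id pair from the log, and report the
# net interfaces under each. Prints nothing on machines without that NIC.
shopt -s nullglob
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    net_devs=("$pci/net/"*)
    ((${#net_devs[@]})) || continue   # function not bound to a net driver
    echo "Found ${pci##*/}: ${net_devs[*]##*/}"
done
```

The `net/` subdirectory is how the kernel links a PCI function to its network interfaces, which is why the log can go straight from a PCI address to `cvl_0_0` without parsing `ip link` output.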
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:03.204 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:03.204 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:14:03.204 00:14:03.204 --- 10.0.0.2 ping statistics --- 00:14:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.204 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:03.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:03.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:14:03.204 00:14:03.204 --- 10.0.0.1 ping statistics --- 00:14:03.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:03.204 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=217205 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 217205 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 217205 ']' 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.204 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=217228 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=4e5463775e454d924c31e6b543edc7658848ec1a7123d40c 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.PM3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 4e5463775e454d924c31e6b543edc7658848ec1a7123d40c 0 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 4e5463775e454d924c31e6b543edc7658848ec1a7123d40c 0 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=4e5463775e454d924c31e6b543edc7658848ec1a7123d40c 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.PM3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.PM3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.PM3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.RCN 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83 3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83 3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa77212f0183d21c15428b6dd83e15a1084d5bb83c82c1e4425151f481a09a83 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:14:03.463 04:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.RCN 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.RCN 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.RCN 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=dcf8b536e14513688d8f498e3a27e6f7 00:14:03.463 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:03.722 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.tVR 00:14:03.722 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key dcf8b536e14513688d8f498e3a27e6f7 1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
dcf8b536e14513688d8f498e3a27e6f7 1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=dcf8b536e14513688d8f498e3a27e6f7 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.tVR 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.tVR 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.tVR 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cc838851ed954d997310cb760184d0df116e421cdcbf58de 00:14:03.723 04:04:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1NG 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cc838851ed954d997310cb760184d0df116e421cdcbf58de 2 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cc838851ed954d997310cb760184d0df116e421cdcbf58de 2 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cc838851ed954d997310cb760184d0df116e421cdcbf58de 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1NG 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1NG 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.1NG 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.r4l 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b 2 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b 2 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7d2252b4f74e7df434bbb64c149271cc1c2407c9f3d8400b 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.r4l 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.r4l 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.r4l 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=38c08662acc3c6e7796d807a72edb3e3 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Fwb 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 38c08662acc3c6e7796d807a72edb3e3 1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 38c08662acc3c6e7796d807a72edb3e3 1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=38c08662acc3c6e7796d807a72edb3e3 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Fwb 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Fwb 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Fwb 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.y9n 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70 3 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70 3 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=39028326f3b335bfee9e96186ab5d3edf70b1e0f6b0a5c315fcc0c80909d5d70 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.y9n 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.y9n 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.y9n 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 217205 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 217205 ']' 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:03.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:03.723 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 217228 /var/tmp/host.sock 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 217228 ']' 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:04.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PM3 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.PM3 00:14:04.290 04:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.PM3 00:14:04.548 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.RCN ]] 00:14:04.548 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN 00:14:04.548 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.548 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.805 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.805 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN 00:14:04.805 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR 00:14:05.062 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.1NG ]] 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG 00:14:05.319 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l 00:14:05.576 04:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Fwb ]] 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb 00:14:05.833 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n 00:14:06.090 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n 00:14:06.346 04:04:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:06.346 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:06.346 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.346 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.346 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.346 04:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.603 04:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.603 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.860 00:14:06.860 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.860 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:06.860 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.117 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.117 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.117 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.117 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:07.374 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.374 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.374 { 00:14:07.374 "cntlid": 1, 00:14:07.374 "qid": 0, 00:14:07.374 "state": "enabled", 00:14:07.374 "thread": "nvmf_tgt_poll_group_000", 00:14:07.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:07.374 "listen_address": { 00:14:07.374 "trtype": "TCP", 00:14:07.374 "adrfam": "IPv4", 00:14:07.374 "traddr": "10.0.0.2", 00:14:07.374 "trsvcid": "4420" 00:14:07.374 }, 00:14:07.374 "peer_address": { 00:14:07.374 "trtype": "TCP", 00:14:07.374 "adrfam": "IPv4", 00:14:07.374 "traddr": "10.0.0.1", 00:14:07.374 "trsvcid": "60866" 00:14:07.374 }, 00:14:07.374 "auth": { 00:14:07.374 "state": "completed", 00:14:07.374 "digest": "sha256", 00:14:07.374 "dhgroup": "null" 00:14:07.374 } 00:14:07.374 } 00:14:07.374 ]' 00:14:07.374 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.374 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.374 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.375 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:07.375 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.375 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.375 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.375 04:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.632 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:07.632 04:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.893 04:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:12.893 00:14:12.893 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.893 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:12.893 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:13.151 { 00:14:13.151 "cntlid": 3, 00:14:13.151 "qid": 0, 00:14:13.151 "state": "enabled", 00:14:13.151 "thread": "nvmf_tgt_poll_group_000", 00:14:13.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:13.151 "listen_address": { 00:14:13.151 "trtype": "TCP", 00:14:13.151 "adrfam": "IPv4", 00:14:13.151 
"traddr": "10.0.0.2", 00:14:13.151 "trsvcid": "4420" 00:14:13.151 }, 00:14:13.151 "peer_address": { 00:14:13.151 "trtype": "TCP", 00:14:13.151 "adrfam": "IPv4", 00:14:13.151 "traddr": "10.0.0.1", 00:14:13.151 "trsvcid": "52846" 00:14:13.151 }, 00:14:13.151 "auth": { 00:14:13.151 "state": "completed", 00:14:13.151 "digest": "sha256", 00:14:13.151 "dhgroup": "null" 00:14:13.151 } 00:14:13.151 } 00:14:13.151 ]' 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.151 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.409 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:13.409 04:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:14.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.342 04:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.600 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:14.857 00:14:15.114 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:15.114 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:15.114 
04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:15.371 { 00:14:15.371 "cntlid": 5, 00:14:15.371 "qid": 0, 00:14:15.371 "state": "enabled", 00:14:15.371 "thread": "nvmf_tgt_poll_group_000", 00:14:15.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:15.371 "listen_address": { 00:14:15.371 "trtype": "TCP", 00:14:15.371 "adrfam": "IPv4", 00:14:15.371 "traddr": "10.0.0.2", 00:14:15.371 "trsvcid": "4420" 00:14:15.371 }, 00:14:15.371 "peer_address": { 00:14:15.371 "trtype": "TCP", 00:14:15.371 "adrfam": "IPv4", 00:14:15.371 "traddr": "10.0.0.1", 00:14:15.371 "trsvcid": "52872" 00:14:15.371 }, 00:14:15.371 "auth": { 00:14:15.371 "state": "completed", 00:14:15.371 "digest": "sha256", 00:14:15.371 "dhgroup": "null" 00:14:15.371 } 00:14:15.371 } 00:14:15.371 ]' 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.371 04:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.628 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:15.628 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:16.560 04:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:16.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.560 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:16.817 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:17.074 00:14:17.074 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:17.074 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:17.074 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:17.331 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:17.331 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:17.331 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.331 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.331 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.331 
04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:17.331 { 00:14:17.331 "cntlid": 7, 00:14:17.331 "qid": 0, 00:14:17.331 "state": "enabled", 00:14:17.331 "thread": "nvmf_tgt_poll_group_000", 00:14:17.331 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:17.331 "listen_address": { 00:14:17.331 "trtype": "TCP", 00:14:17.331 "adrfam": "IPv4", 00:14:17.331 "traddr": "10.0.0.2", 00:14:17.331 "trsvcid": "4420" 00:14:17.331 }, 00:14:17.331 "peer_address": { 00:14:17.331 "trtype": "TCP", 00:14:17.331 "adrfam": "IPv4", 00:14:17.331 "traddr": "10.0.0.1", 00:14:17.331 "trsvcid": "36186" 00:14:17.331 }, 00:14:17.331 "auth": { 00:14:17.331 "state": "completed", 00:14:17.331 "digest": "sha256", 00:14:17.331 "dhgroup": "null" 00:14:17.331 } 00:14:17.331 } 00:14:17.331 ]' 00:14:17.331 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:17.588 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:17.588 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:17.588 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:17.588 04:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:17.588 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:17.588 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:17.588 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.846 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:17.846 04:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:18.780 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:18.780 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.039 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:19.297 00:14:19.297 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:19.297 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:19.297 04:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:19.555 { 00:14:19.555 "cntlid": 9, 00:14:19.555 "qid": 0, 00:14:19.555 "state": "enabled", 00:14:19.555 "thread": "nvmf_tgt_poll_group_000", 00:14:19.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:19.555 "listen_address": { 00:14:19.555 "trtype": "TCP", 00:14:19.555 "adrfam": "IPv4", 00:14:19.555 "traddr": "10.0.0.2", 00:14:19.555 "trsvcid": "4420" 00:14:19.555 }, 00:14:19.555 "peer_address": { 00:14:19.555 "trtype": "TCP", 00:14:19.555 "adrfam": "IPv4", 00:14:19.555 "traddr": "10.0.0.1", 00:14:19.555 "trsvcid": "36214" 00:14:19.555 
}, 00:14:19.555 "auth": { 00:14:19.555 "state": "completed", 00:14:19.555 "digest": "sha256", 00:14:19.555 "dhgroup": "ffdhe2048" 00:14:19.555 } 00:14:19.555 } 00:14:19.555 ]' 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:19.555 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:19.814 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:19.814 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:19.814 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:19.814 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:19.814 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:20.073 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:20.073 04:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.006 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.265 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:21.523 00:14:21.523 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:21.523 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:21.524 04:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.782 { 00:14:21.782 "cntlid": 11, 00:14:21.782 "qid": 0, 00:14:21.782 "state": "enabled", 00:14:21.782 "thread": "nvmf_tgt_poll_group_000", 00:14:21.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:21.782 "listen_address": { 00:14:21.782 "trtype": "TCP", 00:14:21.782 "adrfam": "IPv4", 00:14:21.782 "traddr": "10.0.0.2", 00:14:21.782 "trsvcid": "4420" 00:14:21.782 }, 00:14:21.782 "peer_address": { 00:14:21.782 "trtype": "TCP", 00:14:21.782 "adrfam": "IPv4", 00:14:21.782 "traddr": "10.0.0.1", 00:14:21.782 "trsvcid": "36232" 00:14:21.782 }, 00:14:21.782 "auth": { 00:14:21.782 "state": "completed", 00:14:21.782 "digest": "sha256", 00:14:21.782 "dhgroup": "ffdhe2048" 00:14:21.782 } 00:14:21.782 } 00:14:21.782 ]' 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.782 04:04:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.782 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:22.041 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:22.041 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:22.041 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:22.299 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:22.299 04:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:23.232 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.232 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.490 04:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:23.748 00:14:23.748 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:23.748 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.748 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.006 04:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:24.006 { 00:14:24.006 "cntlid": 13, 00:14:24.006 "qid": 0, 00:14:24.006 "state": "enabled", 00:14:24.006 "thread": "nvmf_tgt_poll_group_000", 00:14:24.006 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:24.006 "listen_address": { 00:14:24.006 "trtype": "TCP", 00:14:24.006 "adrfam": "IPv4", 00:14:24.006 "traddr": "10.0.0.2", 00:14:24.006 "trsvcid": "4420" 00:14:24.006 }, 00:14:24.006 "peer_address": { 00:14:24.006 "trtype": "TCP", 00:14:24.006 "adrfam": "IPv4", 00:14:24.006 "traddr": "10.0.0.1", 00:14:24.006 "trsvcid": "36272" 00:14:24.006 }, 00:14:24.006 "auth": { 00:14:24.006 "state": "completed", 00:14:24.006 "digest": "sha256", 00:14:24.006 "dhgroup": "ffdhe2048" 00:14:24.006 } 00:14:24.006 } 00:14:24.006 ]' 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:24.006 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:24.263 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:24.263 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:24.263 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:24.521 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:24.521 04:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:25.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.453 04:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:25.453 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:25.453 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:25.453 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:25.453 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.711 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:25.968 00:14:25.968 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:25.968 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:25.968 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:26.225 { 00:14:26.225 "cntlid": 15, 00:14:26.225 "qid": 0, 00:14:26.225 "state": "enabled", 00:14:26.225 "thread": "nvmf_tgt_poll_group_000", 00:14:26.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:26.225 "listen_address": { 00:14:26.225 "trtype": "TCP", 00:14:26.225 "adrfam": "IPv4", 00:14:26.225 "traddr": "10.0.0.2", 00:14:26.225 "trsvcid": "4420" 00:14:26.225 }, 00:14:26.225 "peer_address": { 00:14:26.225 "trtype": "TCP", 00:14:26.225 "adrfam": "IPv4", 00:14:26.225 "traddr": "10.0.0.1", 
00:14:26.225 "trsvcid": "36292" 00:14:26.225 }, 00:14:26.225 "auth": { 00:14:26.225 "state": "completed", 00:14:26.225 "digest": "sha256", 00:14:26.225 "dhgroup": "ffdhe2048" 00:14:26.225 } 00:14:26.225 } 00:14:26.225 ]' 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:26.225 04:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:26.481 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:26.481 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:27.409 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:27.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.410 04:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:27.666 04:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.666 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:28.229 00:14:28.229 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:28.229 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:28.229 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:28.487 { 00:14:28.487 "cntlid": 17, 00:14:28.487 "qid": 0, 00:14:28.487 "state": "enabled", 00:14:28.487 "thread": "nvmf_tgt_poll_group_000", 00:14:28.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:28.487 "listen_address": { 00:14:28.487 "trtype": "TCP", 00:14:28.487 "adrfam": "IPv4", 00:14:28.487 "traddr": "10.0.0.2", 00:14:28.487 "trsvcid": "4420" 00:14:28.487 }, 00:14:28.487 "peer_address": { 00:14:28.487 "trtype": "TCP", 00:14:28.487 "adrfam": "IPv4", 00:14:28.487 "traddr": "10.0.0.1", 00:14:28.487 "trsvcid": "52272" 00:14:28.487 }, 00:14:28.487 "auth": { 00:14:28.487 "state": "completed", 00:14:28.487 "digest": "sha256", 00:14:28.487 "dhgroup": "ffdhe3072" 00:14:28.487 } 00:14:28.487 } 00:14:28.487 ]' 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:28.487 04:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.487 04:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.750 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:28.750 04:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:29.681 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.682 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.682 04:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.682 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.682 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.682 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:29.682 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.682 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:29.939 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:14:29.939 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:29.939 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.940 04:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.940 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:30.505 00:14:30.505 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:30.505 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.505 04:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:30.762 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.762 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.762 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.762 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:30.762 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.762 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:30.762 { 00:14:30.762 "cntlid": 19, 00:14:30.762 "qid": 0, 00:14:30.762 "state": "enabled", 00:14:30.762 "thread": "nvmf_tgt_poll_group_000", 00:14:30.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:30.762 "listen_address": { 00:14:30.762 "trtype": "TCP", 00:14:30.762 "adrfam": "IPv4", 00:14:30.762 "traddr": "10.0.0.2", 00:14:30.762 "trsvcid": "4420" 00:14:30.762 }, 00:14:30.762 "peer_address": { 00:14:30.762 "trtype": "TCP", 00:14:30.763 "adrfam": "IPv4", 00:14:30.763 "traddr": "10.0.0.1", 00:14:30.763 "trsvcid": "52308" 00:14:30.763 }, 00:14:30.763 "auth": { 00:14:30.763 "state": "completed", 00:14:30.763 "digest": "sha256", 00:14:30.763 "dhgroup": "ffdhe3072" 00:14:30.763 } 00:14:30.763 } 00:14:30.763 ]' 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.763 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.020 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:31.020 04:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:31.954 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:31.954 04:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.213 04:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:32.778 00:14:32.778 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:32.778 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:32.778 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.034 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.034 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.034 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.034 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.034 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.034 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.034 { 00:14:33.034 "cntlid": 21, 00:14:33.034 "qid": 0, 00:14:33.034 "state": "enabled", 00:14:33.034 "thread": "nvmf_tgt_poll_group_000", 00:14:33.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:33.034 "listen_address": { 00:14:33.034 "trtype": "TCP", 00:14:33.034 "adrfam": "IPv4", 00:14:33.034 "traddr": "10.0.0.2", 00:14:33.034 
"trsvcid": "4420" 00:14:33.035 }, 00:14:33.035 "peer_address": { 00:14:33.035 "trtype": "TCP", 00:14:33.035 "adrfam": "IPv4", 00:14:33.035 "traddr": "10.0.0.1", 00:14:33.035 "trsvcid": "52340" 00:14:33.035 }, 00:14:33.035 "auth": { 00:14:33.035 "state": "completed", 00:14:33.035 "digest": "sha256", 00:14:33.035 "dhgroup": "ffdhe3072" 00:14:33.035 } 00:14:33.035 } 00:14:33.035 ]' 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.035 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.291 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:33.291 04:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.221 04:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:34.478 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:35.053 00:14:35.053 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.053 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:14:35.053 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.310 { 00:14:35.310 "cntlid": 23, 00:14:35.310 "qid": 0, 00:14:35.310 "state": "enabled", 00:14:35.310 "thread": "nvmf_tgt_poll_group_000", 00:14:35.310 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:35.310 "listen_address": { 00:14:35.310 "trtype": "TCP", 00:14:35.310 "adrfam": "IPv4", 00:14:35.310 "traddr": "10.0.0.2", 00:14:35.310 "trsvcid": "4420" 00:14:35.310 }, 00:14:35.310 "peer_address": { 00:14:35.310 "trtype": "TCP", 00:14:35.310 "adrfam": "IPv4", 00:14:35.310 "traddr": "10.0.0.1", 00:14:35.310 "trsvcid": "52364" 00:14:35.310 }, 00:14:35.310 "auth": { 00:14:35.310 "state": "completed", 00:14:35.310 "digest": "sha256", 00:14:35.310 "dhgroup": "ffdhe3072" 00:14:35.310 } 00:14:35.310 } 00:14:35.310 ]' 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:35.310 04:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.310 04:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.567 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:35.567 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.497 04:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.758 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.759 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.759 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:36.759 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.759 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.759 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:36.759 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:37.324 00:14:37.324 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.324 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.324 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.582 04:05:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.582 { 00:14:37.582 "cntlid": 25, 00:14:37.582 "qid": 0, 00:14:37.582 "state": "enabled", 00:14:37.582 "thread": "nvmf_tgt_poll_group_000", 00:14:37.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:37.582 "listen_address": { 00:14:37.582 "trtype": "TCP", 00:14:37.582 "adrfam": "IPv4", 00:14:37.582 "traddr": "10.0.0.2", 00:14:37.582 "trsvcid": "4420" 00:14:37.582 }, 00:14:37.582 "peer_address": { 00:14:37.582 "trtype": "TCP", 00:14:37.582 "adrfam": "IPv4", 00:14:37.582 "traddr": "10.0.0.1", 00:14:37.582 "trsvcid": "58030" 00:14:37.582 }, 00:14:37.582 "auth": { 00:14:37.582 "state": "completed", 00:14:37.582 "digest": "sha256", 00:14:37.582 "dhgroup": "ffdhe4096" 00:14:37.582 } 00:14:37.582 } 00:14:37.582 ]' 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.582 04:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.582 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.582 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.582 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.840 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:37.840 04:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.773 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:38.773 04:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.031 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:39.596 00:14:39.596 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.596 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.596 04:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.854 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.855 { 00:14:39.855 "cntlid": 27, 00:14:39.855 "qid": 0, 00:14:39.855 "state": "enabled", 00:14:39.855 "thread": "nvmf_tgt_poll_group_000", 00:14:39.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:39.855 "listen_address": { 00:14:39.855 "trtype": "TCP", 00:14:39.855 "adrfam": "IPv4", 00:14:39.855 "traddr": "10.0.0.2", 00:14:39.855 
"trsvcid": "4420" 00:14:39.855 }, 00:14:39.855 "peer_address": { 00:14:39.855 "trtype": "TCP", 00:14:39.855 "adrfam": "IPv4", 00:14:39.855 "traddr": "10.0.0.1", 00:14:39.855 "trsvcid": "58056" 00:14:39.855 }, 00:14:39.855 "auth": { 00:14:39.855 "state": "completed", 00:14:39.855 "digest": "sha256", 00:14:39.855 "dhgroup": "ffdhe4096" 00:14:39.855 } 00:14:39.855 } 00:14:39.855 ]' 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.855 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.113 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:40.113 04:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:41.046 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.046 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.047 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:41.047 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.047 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.304 04:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:41.870 00:14:41.871 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.871 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:14:41.871 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:42.129 { 00:14:42.129 "cntlid": 29, 00:14:42.129 "qid": 0, 00:14:42.129 "state": "enabled", 00:14:42.129 "thread": "nvmf_tgt_poll_group_000", 00:14:42.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:42.129 "listen_address": { 00:14:42.129 "trtype": "TCP", 00:14:42.129 "adrfam": "IPv4", 00:14:42.129 "traddr": "10.0.0.2", 00:14:42.129 "trsvcid": "4420" 00:14:42.129 }, 00:14:42.129 "peer_address": { 00:14:42.129 "trtype": "TCP", 00:14:42.129 "adrfam": "IPv4", 00:14:42.129 "traddr": "10.0.0.1", 00:14:42.129 "trsvcid": "58090" 00:14:42.129 }, 00:14:42.129 "auth": { 00:14:42.129 "state": "completed", 00:14:42.129 "digest": "sha256", 00:14:42.129 "dhgroup": "ffdhe4096" 00:14:42.129 } 00:14:42.129 } 00:14:42.129 ]' 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.129 04:05:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.129 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.386 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:42.386 04:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.317 04:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:43.574 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:44.139 00:14:44.139 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:44.139 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.139 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:44.395 { 00:14:44.395 "cntlid": 31, 00:14:44.395 "qid": 0, 00:14:44.395 "state": "enabled", 00:14:44.395 "thread": "nvmf_tgt_poll_group_000", 00:14:44.395 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:44.395 "listen_address": { 00:14:44.395 "trtype": "TCP", 00:14:44.395 "adrfam": "IPv4", 00:14:44.395 "traddr": "10.0.0.2", 00:14:44.395 "trsvcid": "4420" 00:14:44.395 }, 00:14:44.395 "peer_address": { 00:14:44.395 "trtype": "TCP", 00:14:44.395 "adrfam": "IPv4", 00:14:44.395 "traddr": "10.0.0.1", 00:14:44.395 "trsvcid": "58100" 00:14:44.395 }, 00:14:44.395 "auth": { 00:14:44.395 "state": "completed", 00:14:44.395 "digest": "sha256", 00:14:44.395 "dhgroup": "ffdhe4096" 00:14:44.395 } 00:14:44.395 } 00:14:44.395 ]' 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.395 04:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.958 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:44.958 04:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:45.521 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.777 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:45.777 04:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.034 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:46.599 00:14:46.599 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:46.599 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:46.599 04:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.857 { 00:14:46.857 "cntlid": 33, 00:14:46.857 "qid": 0, 00:14:46.857 "state": "enabled", 00:14:46.857 "thread": "nvmf_tgt_poll_group_000", 00:14:46.857 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:46.857 "listen_address": { 00:14:46.857 "trtype": "TCP", 00:14:46.857 "adrfam": "IPv4", 00:14:46.857 "traddr": "10.0.0.2", 00:14:46.857 
"trsvcid": "4420" 00:14:46.857 }, 00:14:46.857 "peer_address": { 00:14:46.857 "trtype": "TCP", 00:14:46.857 "adrfam": "IPv4", 00:14:46.857 "traddr": "10.0.0.1", 00:14:46.857 "trsvcid": "58130" 00:14:46.857 }, 00:14:46.857 "auth": { 00:14:46.857 "state": "completed", 00:14:46.857 "digest": "sha256", 00:14:46.857 "dhgroup": "ffdhe6144" 00:14:46.857 } 00:14:46.857 } 00:14:46.857 ]' 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.857 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.115 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:47.115 04:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.045 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.045 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:48.304 04:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.304 04:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:48.869 00:14:48.869 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.869 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.869 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.434 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.434 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.434 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.434 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.434 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.434 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.434 { 00:14:49.434 "cntlid": 35, 00:14:49.434 "qid": 0, 00:14:49.434 "state": "enabled", 00:14:49.434 "thread": "nvmf_tgt_poll_group_000", 00:14:49.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:49.434 "listen_address": { 00:14:49.435 "trtype": "TCP", 00:14:49.435 "adrfam": "IPv4", 00:14:49.435 "traddr": "10.0.0.2", 00:14:49.435 "trsvcid": "4420" 00:14:49.435 }, 00:14:49.435 "peer_address": { 00:14:49.435 "trtype": "TCP", 00:14:49.435 "adrfam": "IPv4", 00:14:49.435 "traddr": "10.0.0.1", 00:14:49.435 "trsvcid": "59210" 00:14:49.435 }, 00:14:49.435 "auth": { 00:14:49.435 "state": "completed", 00:14:49.435 "digest": "sha256", 00:14:49.435 "dhgroup": "ffdhe6144" 00:14:49.435 } 00:14:49.435 } 00:14:49.435 ]' 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.435 04:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.435 04:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.692 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:49.692 04:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.625 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.883 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.448 00:14:51.448 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.448 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.448 04:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.706 04:05:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.706 { 00:14:51.706 "cntlid": 37, 00:14:51.706 "qid": 0, 00:14:51.706 "state": "enabled", 00:14:51.706 "thread": "nvmf_tgt_poll_group_000", 00:14:51.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:51.706 "listen_address": { 00:14:51.706 "trtype": "TCP", 00:14:51.706 "adrfam": "IPv4", 00:14:51.706 "traddr": "10.0.0.2", 00:14:51.706 "trsvcid": "4420" 00:14:51.706 }, 00:14:51.706 "peer_address": { 00:14:51.706 "trtype": "TCP", 00:14:51.706 "adrfam": "IPv4", 00:14:51.706 "traddr": "10.0.0.1", 00:14:51.706 "trsvcid": "59244" 00:14:51.706 }, 00:14:51.706 "auth": { 00:14:51.706 "state": "completed", 00:14:51.706 "digest": "sha256", 00:14:51.706 "dhgroup": "ffdhe6144" 00:14:51.706 } 00:14:51.706 } 00:14:51.706 ]' 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:51.706 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.963 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.963 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.963 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.221 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:52.221 04:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.170 04:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:53.734 00:14:53.734 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.734 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.734 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.991 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.991 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.991 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.992 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.248 { 00:14:54.248 "cntlid": 39, 00:14:54.248 "qid": 0, 00:14:54.248 "state": "enabled", 00:14:54.248 "thread": "nvmf_tgt_poll_group_000", 00:14:54.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:54.248 "listen_address": { 00:14:54.248 "trtype": "TCP", 00:14:54.248 "adrfam": 
"IPv4", 00:14:54.248 "traddr": "10.0.0.2", 00:14:54.248 "trsvcid": "4420" 00:14:54.248 }, 00:14:54.248 "peer_address": { 00:14:54.248 "trtype": "TCP", 00:14:54.248 "adrfam": "IPv4", 00:14:54.248 "traddr": "10.0.0.1", 00:14:54.248 "trsvcid": "59266" 00:14:54.248 }, 00:14:54.248 "auth": { 00:14:54.248 "state": "completed", 00:14:54.248 "digest": "sha256", 00:14:54.248 "dhgroup": "ffdhe6144" 00:14:54.248 } 00:14:54.248 } 00:14:54.248 ]' 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.248 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.505 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:54.505 04:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.437 04:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:55.694 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:14:55.694 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.694 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.694 
04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:55.694 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:55.694 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.695 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:56.635 00:14:56.635 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.635 04:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.636 04:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.893 { 00:14:56.893 "cntlid": 41, 00:14:56.893 "qid": 0, 00:14:56.893 "state": "enabled", 00:14:56.893 "thread": "nvmf_tgt_poll_group_000", 00:14:56.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:56.893 "listen_address": { 00:14:56.893 "trtype": "TCP", 00:14:56.893 "adrfam": "IPv4", 00:14:56.893 "traddr": "10.0.0.2", 00:14:56.893 "trsvcid": "4420" 00:14:56.893 }, 00:14:56.893 "peer_address": { 00:14:56.893 "trtype": "TCP", 00:14:56.893 "adrfam": "IPv4", 00:14:56.893 "traddr": "10.0.0.1", 00:14:56.893 "trsvcid": "59296" 00:14:56.893 }, 00:14:56.893 "auth": { 00:14:56.893 "state": "completed", 00:14:56.893 "digest": "sha256", 00:14:56.893 "dhgroup": "ffdhe8192" 00:14:56.893 } 00:14:56.893 } 00:14:56.893 ]' 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.893 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.151 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:57.151 04:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.084 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:58.650 04:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:59.215 00:14:59.215 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:59.215 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:59.215 04:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.472 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.472 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.472 04:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.472 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.472 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.472 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:59.472 { 00:14:59.472 "cntlid": 43, 00:14:59.472 "qid": 0, 00:14:59.472 "state": "enabled", 00:14:59.472 "thread": "nvmf_tgt_poll_group_000", 00:14:59.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:14:59.472 "listen_address": { 00:14:59.472 "trtype": "TCP", 00:14:59.472 "adrfam": "IPv4", 00:14:59.472 "traddr": "10.0.0.2", 00:14:59.472 "trsvcid": "4420" 00:14:59.472 }, 00:14:59.473 "peer_address": { 00:14:59.473 "trtype": "TCP", 00:14:59.473 "adrfam": "IPv4", 00:14:59.473 "traddr": "10.0.0.1", 00:14:59.473 "trsvcid": "57598" 00:14:59.473 }, 00:14:59.473 "auth": { 00:14:59.473 "state": "completed", 00:14:59.473 "digest": "sha256", 00:14:59.473 "dhgroup": "ffdhe8192" 00:14:59.473 } 00:14:59.473 } 00:14:59.473 ]' 00:14:59.473 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.730 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.988 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:14:59.988 04:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:00.920 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:01.178 04:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.117 00:15:02.117 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.117 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.117 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.374 { 00:15:02.374 "cntlid": 45, 00:15:02.374 "qid": 0, 00:15:02.374 "state": "enabled", 00:15:02.374 "thread": "nvmf_tgt_poll_group_000", 00:15:02.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:02.374 
"listen_address": { 00:15:02.374 "trtype": "TCP", 00:15:02.374 "adrfam": "IPv4", 00:15:02.374 "traddr": "10.0.0.2", 00:15:02.374 "trsvcid": "4420" 00:15:02.374 }, 00:15:02.374 "peer_address": { 00:15:02.374 "trtype": "TCP", 00:15:02.374 "adrfam": "IPv4", 00:15:02.374 "traddr": "10.0.0.1", 00:15:02.374 "trsvcid": "57614" 00:15:02.374 }, 00:15:02.374 "auth": { 00:15:02.374 "state": "completed", 00:15:02.374 "digest": "sha256", 00:15:02.374 "dhgroup": "ffdhe8192" 00:15:02.374 } 00:15:02.374 } 00:15:02.374 ]' 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.374 04:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.631 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:02.631 04:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:03.561 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.561 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.561 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.562 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.562 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.562 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.562 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.562 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.562 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:03.819 04:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.750 00:15:04.750 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.750 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:04.750 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.007 { 00:15:05.007 "cntlid": 47, 00:15:05.007 "qid": 0, 00:15:05.007 "state": "enabled", 00:15:05.007 "thread": "nvmf_tgt_poll_group_000", 00:15:05.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:05.007 "listen_address": { 00:15:05.007 "trtype": "TCP", 00:15:05.007 "adrfam": "IPv4", 00:15:05.007 "traddr": "10.0.0.2", 00:15:05.007 "trsvcid": "4420" 00:15:05.007 }, 00:15:05.007 "peer_address": { 00:15:05.007 "trtype": "TCP", 00:15:05.007 "adrfam": "IPv4", 00:15:05.007 "traddr": "10.0.0.1", 00:15:05.007 "trsvcid": "57636" 00:15:05.007 }, 00:15:05.007 "auth": { 00:15:05.007 "state": "completed", 00:15:05.007 "digest": "sha256", 00:15:05.007 "dhgroup": "ffdhe8192" 00:15:05.007 } 00:15:05.007 } 00:15:05.007 ]' 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.007 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.008 04:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.008 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.008 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.008 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.008 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.008 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.573 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:05.573 04:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.508 04:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.508 
04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.508 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.072 00:15:07.072 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.072 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.072 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.329 { 00:15:07.329 "cntlid": 49, 00:15:07.329 "qid": 0, 00:15:07.329 "state": "enabled", 00:15:07.329 "thread": "nvmf_tgt_poll_group_000", 00:15:07.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:07.329 "listen_address": { 00:15:07.329 "trtype": "TCP", 00:15:07.329 "adrfam": "IPv4", 00:15:07.329 "traddr": "10.0.0.2", 00:15:07.329 "trsvcid": "4420" 00:15:07.329 }, 00:15:07.329 "peer_address": { 00:15:07.329 "trtype": "TCP", 00:15:07.329 "adrfam": "IPv4", 00:15:07.329 "traddr": "10.0.0.1", 00:15:07.329 "trsvcid": "57676" 00:15:07.329 }, 00:15:07.329 "auth": { 00:15:07.329 "state": "completed", 00:15:07.329 "digest": "sha384", 00:15:07.329 "dhgroup": "null" 00:15:07.329 } 00:15:07.329 } 00:15:07.329 ]' 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:07.329 04:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.586 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:07.586 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:08.519 04:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.519 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.519 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.519 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.519 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.519 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.519 04:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.519 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.778 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.345 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.345 { 00:15:09.345 "cntlid": 51, 00:15:09.345 "qid": 0, 00:15:09.345 "state": "enabled", 00:15:09.345 "thread": "nvmf_tgt_poll_group_000", 00:15:09.345 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:09.345 "listen_address": { 00:15:09.345 "trtype": "TCP", 00:15:09.345 "adrfam": "IPv4", 00:15:09.345 "traddr": "10.0.0.2", 00:15:09.345 "trsvcid": "4420" 00:15:09.345 }, 00:15:09.345 "peer_address": { 00:15:09.345 "trtype": "TCP", 00:15:09.345 "adrfam": "IPv4", 00:15:09.345 "traddr": "10.0.0.1", 00:15:09.345 "trsvcid": "52930" 00:15:09.345 }, 00:15:09.345 "auth": { 00:15:09.345 "state": "completed", 00:15:09.345 "digest": "sha384", 00:15:09.345 "dhgroup": "null" 00:15:09.345 } 00:15:09.345 } 00:15:09.345 ]' 00:15:09.345 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.604 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.604 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.604 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:09.604 04:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.604 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.604 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.604 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.862 04:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:09.862 04:05:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.808 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.064 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.319 00:15:11.319 04:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.319 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.319 04:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.575 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.575 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.575 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.575 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.832 { 00:15:11.832 "cntlid": 53, 00:15:11.832 "qid": 0, 00:15:11.832 "state": "enabled", 00:15:11.832 "thread": "nvmf_tgt_poll_group_000", 00:15:11.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:11.832 "listen_address": { 00:15:11.832 "trtype": "TCP", 00:15:11.832 "adrfam": "IPv4", 00:15:11.832 "traddr": "10.0.0.2", 00:15:11.832 "trsvcid": "4420" 00:15:11.832 }, 00:15:11.832 "peer_address": { 00:15:11.832 "trtype": "TCP", 00:15:11.832 "adrfam": "IPv4", 00:15:11.832 "traddr": "10.0.0.1", 00:15:11.832 "trsvcid": "52956" 00:15:11.832 }, 00:15:11.832 "auth": { 00:15:11.832 "state": "completed", 00:15:11.832 "digest": "sha384", 00:15:11.832 "dhgroup": "null" 00:15:11.832 } 00:15:11.832 } 00:15:11.832 ]' 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.832 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.088 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:12.089 04:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.018 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:13.274 
04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.274 04:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.531 00:15:13.531 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.531 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.531 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.094 04:05:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:14.094 { 00:15:14.094 "cntlid": 55, 00:15:14.094 "qid": 0, 00:15:14.094 "state": "enabled", 00:15:14.094 "thread": "nvmf_tgt_poll_group_000", 00:15:14.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:14.094 "listen_address": { 00:15:14.094 "trtype": "TCP", 00:15:14.094 "adrfam": "IPv4", 00:15:14.094 "traddr": "10.0.0.2", 00:15:14.094 "trsvcid": "4420" 00:15:14.094 }, 00:15:14.094 "peer_address": { 00:15:14.094 "trtype": "TCP", 00:15:14.094 "adrfam": "IPv4", 00:15:14.094 "traddr": "10.0.0.1", 00:15:14.094 "trsvcid": "52982" 00:15:14.094 }, 00:15:14.094 "auth": { 00:15:14.094 "state": "completed", 00:15:14.094 "digest": "sha384", 00:15:14.094 "dhgroup": "null" 00:15:14.094 } 00:15:14.094 } 00:15:14.094 ]' 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.094 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.352 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:14.352 04:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.285 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.285 04:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.543 04:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.801 00:15:15.801 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.801 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.801 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:16.060 { 00:15:16.060 "cntlid": 57, 00:15:16.060 "qid": 0, 00:15:16.060 "state": "enabled", 00:15:16.060 "thread": "nvmf_tgt_poll_group_000", 00:15:16.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:16.060 "listen_address": { 00:15:16.060 "trtype": "TCP", 00:15:16.060 "adrfam": "IPv4", 00:15:16.060 "traddr": "10.0.0.2", 00:15:16.060 
"trsvcid": "4420" 00:15:16.060 }, 00:15:16.060 "peer_address": { 00:15:16.060 "trtype": "TCP", 00:15:16.060 "adrfam": "IPv4", 00:15:16.060 "traddr": "10.0.0.1", 00:15:16.060 "trsvcid": "52998" 00:15:16.060 }, 00:15:16.060 "auth": { 00:15:16.060 "state": "completed", 00:15:16.060 "digest": "sha384", 00:15:16.060 "dhgroup": "ffdhe2048" 00:15:16.060 } 00:15:16.060 } 00:15:16.060 ]' 00:15:16.060 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.318 04:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.576 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:16.576 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.513 04:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.770 04:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.770 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.771 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.771 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:18.028 00:15:18.028 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.028 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.028 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.285 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.286 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.286 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.286 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.286 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.286 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.286 { 00:15:18.286 "cntlid": 59, 00:15:18.286 "qid": 0, 00:15:18.286 "state": "enabled", 00:15:18.286 "thread": "nvmf_tgt_poll_group_000", 00:15:18.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:18.286 "listen_address": { 00:15:18.286 "trtype": "TCP", 00:15:18.286 "adrfam": "IPv4", 00:15:18.286 "traddr": "10.0.0.2", 00:15:18.286 "trsvcid": "4420" 00:15:18.286 }, 00:15:18.286 "peer_address": { 00:15:18.286 "trtype": "TCP", 00:15:18.286 "adrfam": "IPv4", 00:15:18.286 "traddr": "10.0.0.1", 00:15:18.286 "trsvcid": "55560" 00:15:18.286 }, 00:15:18.286 "auth": { 00:15:18.286 "state": "completed", 00:15:18.286 "digest": "sha384", 00:15:18.286 "dhgroup": "ffdhe2048" 00:15:18.286 } 00:15:18.286 } 00:15:18.286 ]' 00:15:18.286 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.543 04:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:18.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:18.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.543 04:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.800 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:18.800 04:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.733 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.990 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:20.247 00:15:20.247 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.247 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.247 04:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.504 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.504 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.504 04:05:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.505 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.762 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.762 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.762 { 00:15:20.762 "cntlid": 61, 00:15:20.762 "qid": 0, 00:15:20.763 "state": "enabled", 00:15:20.763 "thread": "nvmf_tgt_poll_group_000", 00:15:20.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:20.763 "listen_address": { 00:15:20.763 "trtype": "TCP", 00:15:20.763 "adrfam": "IPv4", 00:15:20.763 "traddr": "10.0.0.2", 00:15:20.763 "trsvcid": "4420" 00:15:20.763 }, 00:15:20.763 "peer_address": { 00:15:20.763 "trtype": "TCP", 00:15:20.763 "adrfam": "IPv4", 00:15:20.763 "traddr": "10.0.0.1", 00:15:20.763 "trsvcid": "55596" 00:15:20.763 }, 00:15:20.763 "auth": { 00:15:20.763 "state": "completed", 00:15:20.763 "digest": "sha384", 00:15:20.763 "dhgroup": "ffdhe2048" 00:15:20.763 } 00:15:20.763 } 00:15:20.763 ]' 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.763 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.020 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:21.020 04:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:21.975 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.232 04:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:22.490 00:15:22.748 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.748 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.748 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:23.004 { 00:15:23.004 "cntlid": 63, 00:15:23.004 "qid": 0, 00:15:23.004 "state": "enabled", 00:15:23.004 "thread": "nvmf_tgt_poll_group_000", 00:15:23.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:23.004 "listen_address": { 00:15:23.004 "trtype": "TCP", 00:15:23.004 "adrfam": 
"IPv4", 00:15:23.004 "traddr": "10.0.0.2", 00:15:23.004 "trsvcid": "4420" 00:15:23.004 }, 00:15:23.004 "peer_address": { 00:15:23.004 "trtype": "TCP", 00:15:23.004 "adrfam": "IPv4", 00:15:23.004 "traddr": "10.0.0.1", 00:15:23.004 "trsvcid": "55632" 00:15:23.004 }, 00:15:23.004 "auth": { 00:15:23.004 "state": "completed", 00:15:23.004 "digest": "sha384", 00:15:23.004 "dhgroup": "ffdhe2048" 00:15:23.004 } 00:15:23.004 } 00:15:23.004 ]' 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:23.004 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:23.261 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:23.261 04:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:24.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.194 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:24.451 
04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.451 04:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:25.018 00:15:25.018 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.018 04:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.018 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:25.275 { 00:15:25.275 "cntlid": 65, 00:15:25.275 "qid": 0, 00:15:25.275 "state": "enabled", 00:15:25.275 "thread": "nvmf_tgt_poll_group_000", 00:15:25.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:25.275 "listen_address": { 00:15:25.275 "trtype": "TCP", 00:15:25.275 "adrfam": "IPv4", 00:15:25.275 "traddr": "10.0.0.2", 00:15:25.275 "trsvcid": "4420" 00:15:25.275 }, 00:15:25.275 "peer_address": { 00:15:25.275 "trtype": "TCP", 00:15:25.275 "adrfam": "IPv4", 00:15:25.275 "traddr": "10.0.0.1", 00:15:25.275 "trsvcid": "55648" 00:15:25.275 }, 00:15:25.275 "auth": { 00:15:25.275 "state": "completed", 00:15:25.275 "digest": "sha384", 00:15:25.275 "dhgroup": "ffdhe3072" 00:15:25.275 } 00:15:25.275 } 00:15:25.275 ]' 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:25.275 04:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:25.533 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:25.533 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.465 04:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:26.722 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:26.722 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.722 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:26.722 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:26.722 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.722 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.723 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:27.286 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.286 04:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.286 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.286 { 00:15:27.286 "cntlid": 67, 00:15:27.286 "qid": 0, 00:15:27.286 "state": "enabled", 00:15:27.286 "thread": "nvmf_tgt_poll_group_000", 00:15:27.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:27.286 "listen_address": { 00:15:27.286 "trtype": "TCP", 00:15:27.286 "adrfam": "IPv4", 00:15:27.286 "traddr": "10.0.0.2", 00:15:27.286 "trsvcid": "4420" 00:15:27.286 }, 00:15:27.287 "peer_address": { 00:15:27.287 "trtype": "TCP", 00:15:27.287 "adrfam": "IPv4", 00:15:27.287 "traddr": "10.0.0.1", 00:15:27.287 "trsvcid": "54410" 00:15:27.287 }, 00:15:27.287 "auth": { 00:15:27.287 "state": "completed", 00:15:27.287 "digest": "sha384", 00:15:27.287 "dhgroup": "ffdhe3072" 00:15:27.287 } 00:15:27.287 } 00:15:27.287 ]' 00:15:27.287 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.545 04:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.802 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:27.802 04:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:28.734 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.992 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:29.250 00:15:29.250 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:29.250 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.250 04:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.508 { 00:15:29.508 "cntlid": 69, 00:15:29.508 "qid": 0, 00:15:29.508 "state": "enabled", 00:15:29.508 "thread": "nvmf_tgt_poll_group_000", 00:15:29.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:29.508 
"listen_address": { 00:15:29.508 "trtype": "TCP", 00:15:29.508 "adrfam": "IPv4", 00:15:29.508 "traddr": "10.0.0.2", 00:15:29.508 "trsvcid": "4420" 00:15:29.508 }, 00:15:29.508 "peer_address": { 00:15:29.508 "trtype": "TCP", 00:15:29.508 "adrfam": "IPv4", 00:15:29.508 "traddr": "10.0.0.1", 00:15:29.508 "trsvcid": "54436" 00:15:29.508 }, 00:15:29.508 "auth": { 00:15:29.508 "state": "completed", 00:15:29.508 "digest": "sha384", 00:15:29.508 "dhgroup": "ffdhe3072" 00:15:29.508 } 00:15:29.508 } 00:15:29.508 ]' 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.508 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.766 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:29.766 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.766 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.766 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.766 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.022 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:30.022 04:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:30.952 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.208 04:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.465 00:15:31.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:31.465 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.722 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.722 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.722 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.722 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.979 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.979 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.979 { 00:15:31.979 "cntlid": 71, 00:15:31.979 "qid": 0, 00:15:31.979 "state": "enabled", 00:15:31.979 "thread": "nvmf_tgt_poll_group_000", 00:15:31.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:31.979 "listen_address": { 00:15:31.979 "trtype": "TCP", 00:15:31.979 "adrfam": "IPv4", 00:15:31.979 "traddr": "10.0.0.2", 00:15:31.979 "trsvcid": "4420" 00:15:31.979 }, 00:15:31.979 "peer_address": { 00:15:31.979 "trtype": "TCP", 00:15:31.979 "adrfam": "IPv4", 00:15:31.979 "traddr": "10.0.0.1", 00:15:31.979 "trsvcid": "54470" 00:15:31.979 }, 00:15:31.979 "auth": { 00:15:31.979 "state": "completed", 00:15:31.979 "digest": "sha384", 00:15:31.979 "dhgroup": "ffdhe3072" 00:15:31.979 } 00:15:31.979 } 00:15:31.979 ]' 00:15:31.979 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.980 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.980 04:06:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.980 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:31.980 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.980 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.980 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.980 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.236 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:32.236 04:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.169 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.169 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.426 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.427 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:33.427 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.427 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.427 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.427 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.427 04:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.992 00:15:33.992 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.992 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.992 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.250 04:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:34.250 { 00:15:34.250 "cntlid": 73, 00:15:34.250 "qid": 0, 00:15:34.250 "state": "enabled", 00:15:34.250 "thread": "nvmf_tgt_poll_group_000", 00:15:34.250 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:34.250 "listen_address": { 00:15:34.250 "trtype": "TCP", 00:15:34.250 "adrfam": "IPv4", 00:15:34.250 "traddr": "10.0.0.2", 00:15:34.250 "trsvcid": "4420" 00:15:34.250 }, 00:15:34.250 "peer_address": { 00:15:34.250 "trtype": "TCP", 00:15:34.250 "adrfam": "IPv4", 00:15:34.250 "traddr": "10.0.0.1", 00:15:34.250 "trsvcid": "54502" 00:15:34.250 }, 00:15:34.250 "auth": { 00:15:34.250 "state": "completed", 00:15:34.250 "digest": "sha384", 00:15:34.250 "dhgroup": "ffdhe4096" 00:15:34.250 } 00:15:34.250 } 00:15:34.250 ]' 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.250 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.250 04:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.509 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:34.509 04:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.443 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.443 04:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.702 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:36.286 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.286 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.544 { 00:15:36.544 "cntlid": 75, 00:15:36.544 "qid": 0, 00:15:36.544 "state": "enabled", 00:15:36.544 "thread": "nvmf_tgt_poll_group_000", 00:15:36.544 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:36.544 
"listen_address": { 00:15:36.544 "trtype": "TCP", 00:15:36.544 "adrfam": "IPv4", 00:15:36.544 "traddr": "10.0.0.2", 00:15:36.544 "trsvcid": "4420" 00:15:36.544 }, 00:15:36.544 "peer_address": { 00:15:36.544 "trtype": "TCP", 00:15:36.544 "adrfam": "IPv4", 00:15:36.544 "traddr": "10.0.0.1", 00:15:36.544 "trsvcid": "54544" 00:15:36.544 }, 00:15:36.544 "auth": { 00:15:36.544 "state": "completed", 00:15:36.544 "digest": "sha384", 00:15:36.544 "dhgroup": "ffdhe4096" 00:15:36.544 } 00:15:36.544 } 00:15:36.544 ]' 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.544 04:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.801 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:36.801 04:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.734 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.993 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.559 00:15:38.559 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:38.559 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.559 04:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.816 { 00:15:38.816 "cntlid": 77, 00:15:38.816 "qid": 0, 00:15:38.816 "state": "enabled", 00:15:38.816 "thread": "nvmf_tgt_poll_group_000", 00:15:38.816 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:38.816 "listen_address": { 00:15:38.816 "trtype": "TCP", 00:15:38.816 "adrfam": "IPv4", 00:15:38.816 "traddr": "10.0.0.2", 00:15:38.816 "trsvcid": "4420" 00:15:38.816 }, 00:15:38.816 "peer_address": { 00:15:38.816 "trtype": "TCP", 00:15:38.816 "adrfam": "IPv4", 00:15:38.816 "traddr": "10.0.0.1", 00:15:38.816 "trsvcid": "35198" 00:15:38.816 }, 00:15:38.816 "auth": { 00:15:38.816 "state": "completed", 00:15:38.816 "digest": "sha384", 00:15:38.816 "dhgroup": "ffdhe4096" 00:15:38.816 } 00:15:38.816 } 00:15:38.816 ]' 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.816 04:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.816 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.073 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:39.073 04:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.003 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:40.261 04:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.261 04:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.824 00:15:40.824 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.824 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.824 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.081 04:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.081 { 00:15:41.081 "cntlid": 79, 00:15:41.081 "qid": 0, 00:15:41.081 "state": "enabled", 00:15:41.081 "thread": "nvmf_tgt_poll_group_000", 00:15:41.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:41.081 "listen_address": { 00:15:41.081 "trtype": "TCP", 00:15:41.081 "adrfam": "IPv4", 00:15:41.081 "traddr": "10.0.0.2", 00:15:41.081 "trsvcid": "4420" 00:15:41.081 }, 00:15:41.081 "peer_address": { 00:15:41.081 "trtype": "TCP", 00:15:41.081 "adrfam": "IPv4", 00:15:41.081 "traddr": "10.0.0.1", 00:15:41.081 "trsvcid": "35222" 00:15:41.081 }, 00:15:41.081 "auth": { 00:15:41.081 "state": "completed", 00:15:41.081 "digest": "sha384", 00:15:41.081 "dhgroup": "ffdhe4096" 00:15:41.081 } 00:15:41.081 } 00:15:41.081 ]' 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.081 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.081 04:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.339 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:41.339 04:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:42.270 04:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.527 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.092 00:15:43.092 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.092 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.092 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.349 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.349 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.349 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.349 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.350 { 00:15:43.350 "cntlid": 81, 00:15:43.350 "qid": 0, 00:15:43.350 "state": "enabled", 00:15:43.350 "thread": "nvmf_tgt_poll_group_000", 00:15:43.350 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:43.350 "listen_address": { 
00:15:43.350 "trtype": "TCP", 00:15:43.350 "adrfam": "IPv4", 00:15:43.350 "traddr": "10.0.0.2", 00:15:43.350 "trsvcid": "4420" 00:15:43.350 }, 00:15:43.350 "peer_address": { 00:15:43.350 "trtype": "TCP", 00:15:43.350 "adrfam": "IPv4", 00:15:43.350 "traddr": "10.0.0.1", 00:15:43.350 "trsvcid": "35248" 00:15:43.350 }, 00:15:43.350 "auth": { 00:15:43.350 "state": "completed", 00:15:43.350 "digest": "sha384", 00:15:43.350 "dhgroup": "ffdhe6144" 00:15:43.350 } 00:15:43.350 } 00:15:43.350 ]' 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:43.350 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.607 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.607 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.607 04:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.865 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:43.865 04:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.798 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:44.799 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.055 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.619 00:15:45.619 04:06:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.619 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.619 04:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.877 { 00:15:45.877 "cntlid": 83, 00:15:45.877 "qid": 0, 00:15:45.877 "state": "enabled", 00:15:45.877 "thread": "nvmf_tgt_poll_group_000", 00:15:45.877 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:45.877 "listen_address": { 00:15:45.877 "trtype": "TCP", 00:15:45.877 "adrfam": "IPv4", 00:15:45.877 "traddr": "10.0.0.2", 00:15:45.877 "trsvcid": "4420" 00:15:45.877 }, 00:15:45.877 "peer_address": { 00:15:45.877 "trtype": "TCP", 00:15:45.877 "adrfam": "IPv4", 00:15:45.877 "traddr": "10.0.0.1", 00:15:45.877 "trsvcid": "35266" 00:15:45.877 }, 00:15:45.877 "auth": { 00:15:45.877 "state": "completed", 00:15:45.877 "digest": "sha384", 00:15:45.877 "dhgroup": "ffdhe6144" 00:15:45.877 } 00:15:45.877 } 00:15:45.877 ]' 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.877 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.135 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:46.135 04:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.068 04:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.068 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.325 04:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.889 00:15:47.889 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.889 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.889 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.146 { 00:15:48.146 "cntlid": 85, 00:15:48.146 "qid": 0, 00:15:48.146 "state": "enabled", 00:15:48.146 "thread": "nvmf_tgt_poll_group_000", 00:15:48.146 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:48.146 "listen_address": { 00:15:48.146 "trtype": "TCP", 00:15:48.146 "adrfam": "IPv4", 00:15:48.146 "traddr": "10.0.0.2", 00:15:48.146 "trsvcid": "4420" 00:15:48.146 }, 00:15:48.146 "peer_address": { 00:15:48.146 "trtype": "TCP", 00:15:48.146 "adrfam": "IPv4", 00:15:48.146 "traddr": "10.0.0.1", 00:15:48.146 "trsvcid": "43934" 00:15:48.146 }, 00:15:48.146 "auth": { 00:15:48.146 "state": "completed", 00:15:48.146 "digest": "sha384", 00:15:48.146 "dhgroup": "ffdhe6144" 00:15:48.146 } 00:15:48.146 } 00:15:48.146 ]' 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.146 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.710 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:48.710 04:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:49.273 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.530 04:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.839 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:49.840 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.840 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.840 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.840 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.840 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.840 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:50.096 00:15:50.352 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.352 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.352 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.608 { 00:15:50.608 "cntlid": 87, 00:15:50.608 "qid": 0, 00:15:50.608 "state": "enabled", 00:15:50.608 "thread": "nvmf_tgt_poll_group_000", 00:15:50.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:50.608 "listen_address": { 00:15:50.608 "trtype": 
"TCP", 00:15:50.608 "adrfam": "IPv4", 00:15:50.608 "traddr": "10.0.0.2", 00:15:50.608 "trsvcid": "4420" 00:15:50.608 }, 00:15:50.608 "peer_address": { 00:15:50.608 "trtype": "TCP", 00:15:50.608 "adrfam": "IPv4", 00:15:50.608 "traddr": "10.0.0.1", 00:15:50.608 "trsvcid": "43968" 00:15:50.608 }, 00:15:50.608 "auth": { 00:15:50.608 "state": "completed", 00:15:50.608 "digest": "sha384", 00:15:50.608 "dhgroup": "ffdhe6144" 00:15:50.608 } 00:15:50.608 } 00:15:50.608 ]' 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.608 04:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.608 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:50.608 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.608 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.608 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.609 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.865 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:50.865 04:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:15:51.795 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:51.796 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.053 04:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.053 04:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.985 00:15:52.985 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.985 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.985 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.243 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.243 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.243 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.243 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.243 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.243 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.243 { 00:15:53.243 "cntlid": 89, 00:15:53.243 "qid": 0, 00:15:53.243 "state": "enabled", 00:15:53.243 "thread": "nvmf_tgt_poll_group_000", 00:15:53.243 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:53.244 "listen_address": { 00:15:53.244 "trtype": "TCP", 00:15:53.244 "adrfam": "IPv4", 00:15:53.244 "traddr": "10.0.0.2", 00:15:53.244 "trsvcid": "4420" 00:15:53.244 }, 00:15:53.244 "peer_address": { 00:15:53.244 "trtype": "TCP", 00:15:53.244 "adrfam": "IPv4", 00:15:53.244 "traddr": "10.0.0.1", 00:15:53.244 "trsvcid": "43996" 00:15:53.244 }, 00:15:53.244 "auth": { 00:15:53.244 "state": "completed", 00:15:53.244 "digest": "sha384", 00:15:53.244 "dhgroup": "ffdhe8192" 00:15:53.244 } 00:15:53.244 } 00:15:53.244 ]' 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.244 04:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.244 04:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.502 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:53.502 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.435 04:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.693 04:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.625 00:15:55.625 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.625 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.625 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.883 { 00:15:55.883 "cntlid": 91, 00:15:55.883 "qid": 0, 00:15:55.883 "state": "enabled", 00:15:55.883 "thread": "nvmf_tgt_poll_group_000", 00:15:55.883 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:55.883 "listen_address": { 00:15:55.883 "trtype": "TCP", 00:15:55.883 "adrfam": "IPv4", 00:15:55.883 "traddr": "10.0.0.2", 00:15:55.883 "trsvcid": "4420" 00:15:55.883 }, 00:15:55.883 "peer_address": { 00:15:55.883 "trtype": "TCP", 00:15:55.883 "adrfam": "IPv4", 00:15:55.883 "traddr": "10.0.0.1", 00:15:55.883 "trsvcid": "44038" 00:15:55.883 }, 00:15:55.883 "auth": { 00:15:55.883 "state": "completed", 00:15:55.883 "digest": "sha384", 00:15:55.883 "dhgroup": "ffdhe8192" 00:15:55.883 } 00:15:55.883 } 00:15:55.883 ]' 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.883 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.140 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:56.140 04:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.071 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.328 04:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:58.261 00:15:58.261 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.261 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.261 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.518 { 00:15:58.518 "cntlid": 93, 00:15:58.518 "qid": 0, 00:15:58.518 "state": "enabled", 00:15:58.518 "thread": "nvmf_tgt_poll_group_000", 00:15:58.518 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:15:58.518 "listen_address": { 00:15:58.518 "trtype": "TCP", 00:15:58.518 "adrfam": "IPv4", 00:15:58.518 "traddr": "10.0.0.2", 00:15:58.518 "trsvcid": "4420" 00:15:58.518 }, 00:15:58.518 "peer_address": { 00:15:58.518 "trtype": "TCP", 00:15:58.518 "adrfam": "IPv4", 00:15:58.518 "traddr": "10.0.0.1", 00:15:58.518 "trsvcid": "50648" 00:15:58.518 }, 00:15:58.518 "auth": { 00:15:58.518 "state": "completed", 00:15:58.518 "digest": "sha384", 00:15:58.518 "dhgroup": "ffdhe8192" 00:15:58.518 } 00:15:58.518 } 00:15:58.518 ]' 00:15:58.518 04:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.518 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.518 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.518 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:58.518 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.775 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.775 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.775 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.033 04:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:59.033 04:06:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.963 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:00.219 04:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.149 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.149 { 00:16:01.149 "cntlid": 95, 00:16:01.149 "qid": 0, 00:16:01.149 "state": "enabled", 00:16:01.149 "thread": "nvmf_tgt_poll_group_000", 00:16:01.149 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:01.149 "listen_address": { 00:16:01.149 "trtype": "TCP", 00:16:01.149 "adrfam": "IPv4", 00:16:01.149 "traddr": "10.0.0.2", 00:16:01.149 "trsvcid": "4420" 00:16:01.149 }, 00:16:01.149 "peer_address": { 00:16:01.149 "trtype": "TCP", 00:16:01.149 "adrfam": "IPv4", 00:16:01.149 "traddr": "10.0.0.1", 00:16:01.149 "trsvcid": "50678" 00:16:01.149 }, 00:16:01.149 "auth": { 00:16:01.149 "state": "completed", 00:16:01.149 "digest": "sha384", 00:16:01.149 "dhgroup": "ffdhe8192" 00:16:01.149 } 00:16:01.149 } 00:16:01.149 ]' 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:01.149 04:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:01.149 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:01.406 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:01.406 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:01.406 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.406 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.406 04:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.665 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:01.665 04:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.600 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.600 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.859 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.163 00:16:03.436 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.436 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.436 04:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.708 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.708 04:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.708 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.708 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.708 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.708 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.708 { 00:16:03.708 "cntlid": 97, 00:16:03.708 "qid": 0, 00:16:03.708 "state": "enabled", 00:16:03.708 "thread": "nvmf_tgt_poll_group_000", 00:16:03.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:03.708 "listen_address": { 00:16:03.708 "trtype": "TCP", 00:16:03.708 "adrfam": "IPv4", 00:16:03.708 "traddr": "10.0.0.2", 00:16:03.708 "trsvcid": "4420" 00:16:03.708 }, 00:16:03.708 "peer_address": { 00:16:03.708 "trtype": "TCP", 00:16:03.708 "adrfam": "IPv4", 00:16:03.708 "traddr": "10.0.0.1", 00:16:03.708 "trsvcid": "50702" 00:16:03.709 }, 00:16:03.709 "auth": { 00:16:03.709 "state": "completed", 00:16:03.709 "digest": "sha512", 00:16:03.709 "dhgroup": "null" 00:16:03.709 } 00:16:03.709 } 00:16:03.709 ]' 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.709 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.984 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:03.984 04:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:04.973 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.973 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.973 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.973 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.973 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.973 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.973 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.974 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:04.974 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.247 04:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.522 00:16:05.797 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.797 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.797 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.073 { 00:16:06.073 "cntlid": 99, 
00:16:06.073 "qid": 0, 00:16:06.073 "state": "enabled", 00:16:06.073 "thread": "nvmf_tgt_poll_group_000", 00:16:06.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:06.073 "listen_address": { 00:16:06.073 "trtype": "TCP", 00:16:06.073 "adrfam": "IPv4", 00:16:06.073 "traddr": "10.0.0.2", 00:16:06.073 "trsvcid": "4420" 00:16:06.073 }, 00:16:06.073 "peer_address": { 00:16:06.073 "trtype": "TCP", 00:16:06.073 "adrfam": "IPv4", 00:16:06.073 "traddr": "10.0.0.1", 00:16:06.073 "trsvcid": "50726" 00:16:06.073 }, 00:16:06.073 "auth": { 00:16:06.073 "state": "completed", 00:16:06.073 "digest": "sha512", 00:16:06.073 "dhgroup": "null" 00:16:06.073 } 00:16:06.073 } 00:16:06.073 ]' 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.073 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.345 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret 
DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:06.345 04:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.334 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.616 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:16:07.616 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.616 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.617 04:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.888 00:16:07.888 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.888 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.888 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.163 { 00:16:08.163 "cntlid": 101, 00:16:08.163 "qid": 0, 00:16:08.163 "state": "enabled", 00:16:08.163 "thread": "nvmf_tgt_poll_group_000", 00:16:08.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:08.163 "listen_address": { 00:16:08.163 "trtype": "TCP", 00:16:08.163 "adrfam": "IPv4", 00:16:08.163 "traddr": "10.0.0.2", 00:16:08.163 "trsvcid": "4420" 00:16:08.163 }, 00:16:08.163 "peer_address": { 00:16:08.163 "trtype": "TCP", 00:16:08.163 "adrfam": "IPv4", 00:16:08.163 "traddr": "10.0.0.1", 00:16:08.163 "trsvcid": "57920" 00:16:08.163 }, 00:16:08.163 "auth": { 00:16:08.163 "state": "completed", 00:16:08.163 "digest": "sha512", 00:16:08.163 "dhgroup": "null" 00:16:08.163 } 00:16:08.163 } 
00:16:08.163 ]' 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.163 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.450 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:08.450 04:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.384 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.384 04:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.642 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:09.899 00:16:09.899 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.899 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.899 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.156 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.156 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:10.156 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.157 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.157 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.157 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.157 { 00:16:10.157 "cntlid": 103, 00:16:10.157 "qid": 0, 00:16:10.157 "state": "enabled", 00:16:10.157 "thread": "nvmf_tgt_poll_group_000", 00:16:10.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:10.157 "listen_address": { 00:16:10.157 "trtype": "TCP", 00:16:10.157 "adrfam": "IPv4", 00:16:10.157 "traddr": "10.0.0.2", 00:16:10.157 "trsvcid": "4420" 00:16:10.157 }, 00:16:10.157 "peer_address": { 00:16:10.157 "trtype": "TCP", 00:16:10.157 "adrfam": "IPv4", 00:16:10.157 "traddr": "10.0.0.1", 00:16:10.157 "trsvcid": "57942" 00:16:10.157 }, 00:16:10.157 "auth": { 00:16:10.157 "state": "completed", 00:16:10.157 "digest": "sha512", 00:16:10.157 "dhgroup": "null" 00:16:10.157 } 00:16:10.157 } 00:16:10.157 ]' 00:16:10.157 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.413 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.413 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.413 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:10.413 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.413 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.413 04:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.413 04:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.670 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:10.670 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.603 04:06:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.603 04:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.862 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.120 00:16:12.120 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.120 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.120 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:12.378 { 00:16:12.378 "cntlid": 105, 00:16:12.378 "qid": 0, 00:16:12.378 "state": "enabled", 00:16:12.378 "thread": "nvmf_tgt_poll_group_000", 00:16:12.378 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:12.378 "listen_address": { 00:16:12.378 "trtype": "TCP", 00:16:12.378 "adrfam": "IPv4", 00:16:12.378 "traddr": "10.0.0.2", 00:16:12.378 "trsvcid": "4420" 00:16:12.378 }, 00:16:12.378 "peer_address": { 00:16:12.378 "trtype": "TCP", 00:16:12.378 "adrfam": "IPv4", 00:16:12.378 "traddr": "10.0.0.1", 00:16:12.378 "trsvcid": "57968" 00:16:12.378 }, 00:16:12.378 "auth": { 00:16:12.378 "state": "completed", 00:16:12.378 "digest": "sha512", 00:16:12.378 "dhgroup": "ffdhe2048" 00:16:12.378 } 00:16:12.378 } 00:16:12.378 ]' 00:16:12.378 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:12.636 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.636 04:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:12.636 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:12.636 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:12.636 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.636 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.636 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.894 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:12.894 04:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:13.828 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:14.086 04:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.086 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:14.345 00:16:14.345 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.345 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.345 04:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.603 { 00:16:14.603 "cntlid": 107, 00:16:14.603 "qid": 0, 00:16:14.603 "state": "enabled", 00:16:14.603 "thread": "nvmf_tgt_poll_group_000", 00:16:14.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:14.603 "listen_address": { 00:16:14.603 "trtype": "TCP", 00:16:14.603 "adrfam": "IPv4", 00:16:14.603 "traddr": "10.0.0.2", 00:16:14.603 "trsvcid": "4420" 00:16:14.603 }, 00:16:14.603 "peer_address": { 00:16:14.603 "trtype": "TCP", 00:16:14.603 "adrfam": "IPv4", 00:16:14.603 "traddr": "10.0.0.1", 00:16:14.603 "trsvcid": "57988" 00:16:14.603 }, 00:16:14.603 "auth": { 00:16:14.603 "state": 
"completed", 00:16:14.603 "digest": "sha512", 00:16:14.603 "dhgroup": "ffdhe2048" 00:16:14.603 } 00:16:14.603 } 00:16:14.603 ]' 00:16:14.603 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.861 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.120 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:15.120 04:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:16.054 04:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.055 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.055 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:16.313 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:16:16.313 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.313 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:16.313 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:16.313 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.314 04:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:16.582 00:16:16.582 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.582 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.582 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.838 
04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.838 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.839 { 00:16:16.839 "cntlid": 109, 00:16:16.839 "qid": 0, 00:16:16.839 "state": "enabled", 00:16:16.839 "thread": "nvmf_tgt_poll_group_000", 00:16:16.839 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:16.839 "listen_address": { 00:16:16.839 "trtype": "TCP", 00:16:16.839 "adrfam": "IPv4", 00:16:16.839 "traddr": "10.0.0.2", 00:16:16.839 "trsvcid": "4420" 00:16:16.839 }, 00:16:16.839 "peer_address": { 00:16:16.839 "trtype": "TCP", 00:16:16.839 "adrfam": "IPv4", 00:16:16.839 "traddr": "10.0.0.1", 00:16:16.839 "trsvcid": "58022" 00:16:16.839 }, 00:16:16.839 "auth": { 00:16:16.839 "state": "completed", 00:16:16.839 "digest": "sha512", 00:16:16.839 "dhgroup": "ffdhe2048" 00:16:16.839 } 00:16:16.839 } 00:16:16.839 ]' 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.839 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.095 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:17.095 04:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.095 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.095 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.095 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.352 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:17.352 04:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.284 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.284 
04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.284 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.540 04:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.540 04:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.813 00:16:18.813 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.813 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.813 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.069 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.070 { 00:16:19.070 "cntlid": 111, 
00:16:19.070 "qid": 0, 00:16:19.070 "state": "enabled", 00:16:19.070 "thread": "nvmf_tgt_poll_group_000", 00:16:19.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:19.070 "listen_address": { 00:16:19.070 "trtype": "TCP", 00:16:19.070 "adrfam": "IPv4", 00:16:19.070 "traddr": "10.0.0.2", 00:16:19.070 "trsvcid": "4420" 00:16:19.070 }, 00:16:19.070 "peer_address": { 00:16:19.070 "trtype": "TCP", 00:16:19.070 "adrfam": "IPv4", 00:16:19.070 "traddr": "10.0.0.1", 00:16:19.070 "trsvcid": "38166" 00:16:19.070 }, 00:16:19.070 "auth": { 00:16:19.070 "state": "completed", 00:16:19.070 "digest": "sha512", 00:16:19.070 "dhgroup": "ffdhe2048" 00:16:19.070 } 00:16:19.070 } 00:16:19.070 ]' 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.070 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.326 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.326 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:19.326 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.326 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.326 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.326 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.583 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:19.583 04:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.514 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:20.514 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.515 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.515 04:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:20.772 04:06:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.772 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.773 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:21.030 00:16:21.030 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.030 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.030 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.289 { 00:16:21.289 "cntlid": 113, 00:16:21.289 "qid": 0, 00:16:21.289 "state": "enabled", 00:16:21.289 "thread": "nvmf_tgt_poll_group_000", 00:16:21.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:21.289 "listen_address": { 00:16:21.289 "trtype": "TCP", 00:16:21.289 "adrfam": "IPv4", 00:16:21.289 "traddr": "10.0.0.2", 00:16:21.289 "trsvcid": "4420" 00:16:21.289 }, 00:16:21.289 "peer_address": { 00:16:21.289 "trtype": "TCP", 00:16:21.289 "adrfam": "IPv4", 00:16:21.289 "traddr": "10.0.0.1", 00:16:21.289 "trsvcid": "38198" 00:16:21.289 }, 00:16:21.289 "auth": { 00:16:21.289 "state": 
"completed", 00:16:21.289 "digest": "sha512", 00:16:21.289 "dhgroup": "ffdhe3072" 00:16:21.289 } 00:16:21.289 } 00:16:21.289 ]' 00:16:21.289 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.548 04:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.809 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:21.809 04:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.745 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.003 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.262 00:16:23.262 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.262 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.262 04:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.828 { 00:16:23.828 "cntlid": 115, 00:16:23.828 "qid": 0, 00:16:23.828 "state": "enabled", 00:16:23.828 "thread": "nvmf_tgt_poll_group_000", 00:16:23.828 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:23.828 "listen_address": { 00:16:23.828 "trtype": "TCP", 00:16:23.828 "adrfam": "IPv4", 00:16:23.828 "traddr": "10.0.0.2", 00:16:23.828 "trsvcid": "4420" 00:16:23.828 }, 00:16:23.828 "peer_address": { 00:16:23.828 "trtype": "TCP", 00:16:23.828 "adrfam": "IPv4", 00:16:23.828 "traddr": "10.0.0.1", 00:16:23.828 "trsvcid": "38240" 00:16:23.828 }, 00:16:23.828 "auth": { 00:16:23.828 "state": "completed", 00:16:23.828 "digest": "sha512", 00:16:23.828 "dhgroup": "ffdhe3072" 00:16:23.828 } 00:16:23.828 } 00:16:23.828 ]' 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.828 04:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.828 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.087 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:24.087 04:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:25.021 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.021 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:25.021 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:25.021 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.022 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.022 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.022 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.022 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.280 04:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.538 00:16:25.538 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.538 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.538 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.797 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.797 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.797 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.797 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.797 04:06:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.797 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.797 { 00:16:25.797 "cntlid": 117, 00:16:25.797 "qid": 0, 00:16:25.797 "state": "enabled", 00:16:25.797 "thread": "nvmf_tgt_poll_group_000", 00:16:25.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:25.797 "listen_address": { 00:16:25.797 "trtype": "TCP", 00:16:25.797 "adrfam": "IPv4", 00:16:25.797 "traddr": "10.0.0.2", 00:16:25.797 "trsvcid": "4420" 00:16:25.797 }, 00:16:25.797 "peer_address": { 00:16:25.797 "trtype": "TCP", 00:16:25.797 "adrfam": "IPv4", 00:16:25.797 "traddr": "10.0.0.1", 00:16:25.797 "trsvcid": "38262" 00:16:25.797 }, 00:16:25.797 "auth": { 00:16:25.797 "state": "completed", 00:16:25.797 "digest": "sha512", 00:16:25.797 "dhgroup": "ffdhe3072" 00:16:25.797 } 00:16:25.797 } 00:16:25.797 ]' 00:16:25.797 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.055 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.312 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:26.312 04:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.243 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.243 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.501 04:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.759 00:16:27.759 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.759 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.759 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.017 { 00:16:28.017 "cntlid": 119, 00:16:28.017 "qid": 0, 00:16:28.017 "state": "enabled", 00:16:28.017 "thread": "nvmf_tgt_poll_group_000", 00:16:28.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:28.017 "listen_address": { 00:16:28.017 "trtype": "TCP", 00:16:28.017 "adrfam": "IPv4", 00:16:28.017 "traddr": "10.0.0.2", 00:16:28.017 "trsvcid": "4420" 00:16:28.017 }, 00:16:28.017 "peer_address": { 00:16:28.017 "trtype": "TCP", 00:16:28.017 "adrfam": "IPv4", 00:16:28.017 "traddr": "10.0.0.1", 
00:16:28.017 "trsvcid": "53244" 00:16:28.017 }, 00:16:28.017 "auth": { 00:16:28.017 "state": "completed", 00:16:28.017 "digest": "sha512", 00:16:28.017 "dhgroup": "ffdhe3072" 00:16:28.017 } 00:16:28.017 } 00:16:28.017 ]' 00:16:28.017 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.274 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.274 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.274 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:28.275 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.275 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.275 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.275 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.531 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:28.531 04:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.462 04:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.719 04:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.719 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.976 00:16:29.976 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.976 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.976 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.234 { 00:16:30.234 "cntlid": 121, 00:16:30.234 "qid": 0, 00:16:30.234 "state": "enabled", 00:16:30.234 "thread": "nvmf_tgt_poll_group_000", 00:16:30.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:30.234 "listen_address": { 00:16:30.234 "trtype": "TCP", 00:16:30.234 "adrfam": "IPv4", 00:16:30.234 "traddr": "10.0.0.2", 00:16:30.234 "trsvcid": "4420" 00:16:30.234 }, 00:16:30.234 "peer_address": { 00:16:30.234 "trtype": "TCP", 00:16:30.234 "adrfam": "IPv4", 00:16:30.234 "traddr": "10.0.0.1", 00:16:30.234 "trsvcid": "53272" 00:16:30.234 }, 00:16:30.234 "auth": { 00:16:30.234 "state": "completed", 00:16:30.234 "digest": "sha512", 00:16:30.234 "dhgroup": "ffdhe4096" 00:16:30.234 } 00:16:30.234 } 00:16:30.234 ]' 00:16:30.234 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.492 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.492 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.492 04:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.492 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.492 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.492 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.492 04:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.750 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:30.750 04:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.689 04:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.689 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.947 04:07:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.947 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.948 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.948 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:32.514 00:16:32.514 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.514 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.514 04:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.514 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.514 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.514 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.514 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.772 { 00:16:32.772 "cntlid": 123, 00:16:32.772 "qid": 0, 00:16:32.772 "state": "enabled", 00:16:32.772 "thread": "nvmf_tgt_poll_group_000", 00:16:32.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:32.772 "listen_address": { 00:16:32.772 "trtype": "TCP", 00:16:32.772 "adrfam": "IPv4", 00:16:32.772 "traddr": "10.0.0.2", 00:16:32.772 "trsvcid": "4420" 00:16:32.772 }, 00:16:32.772 "peer_address": { 00:16:32.772 "trtype": "TCP", 00:16:32.772 "adrfam": "IPv4", 00:16:32.772 "traddr": "10.0.0.1", 00:16:32.772 "trsvcid": "53310" 00:16:32.772 }, 00:16:32.772 "auth": { 00:16:32.772 "state": "completed", 00:16:32.772 "digest": "sha512", 00:16:32.772 "dhgroup": "ffdhe4096" 00:16:32.772 } 00:16:32.772 } 00:16:32.772 ]' 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.772 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.035 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:33.035 04:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.972 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.972 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:33.972 04:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.230 04:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:34.796 00:16:34.796 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.797 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.797 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.055 { 00:16:35.055 "cntlid": 125, 00:16:35.055 "qid": 0, 00:16:35.055 "state": "enabled", 00:16:35.055 "thread": "nvmf_tgt_poll_group_000", 00:16:35.055 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:35.055 "listen_address": { 00:16:35.055 "trtype": "TCP", 00:16:35.055 "adrfam": "IPv4", 00:16:35.055 "traddr": "10.0.0.2", 00:16:35.055 
"trsvcid": "4420" 00:16:35.055 }, 00:16:35.055 "peer_address": { 00:16:35.055 "trtype": "TCP", 00:16:35.055 "adrfam": "IPv4", 00:16:35.055 "traddr": "10.0.0.1", 00:16:35.055 "trsvcid": "53336" 00:16:35.055 }, 00:16:35.055 "auth": { 00:16:35.055 "state": "completed", 00:16:35.055 "digest": "sha512", 00:16:35.055 "dhgroup": "ffdhe4096" 00:16:35.055 } 00:16:35.055 } 00:16:35.055 ]' 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.055 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.313 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:35.313 04:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:36.247 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.247 04:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:36.506 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.072 00:16:37.072 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.072 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.072 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.329 { 00:16:37.329 "cntlid": 127, 00:16:37.329 "qid": 0, 00:16:37.329 "state": "enabled", 00:16:37.329 "thread": "nvmf_tgt_poll_group_000", 00:16:37.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:37.329 "listen_address": { 00:16:37.329 "trtype": "TCP", 00:16:37.329 "adrfam": "IPv4", 00:16:37.329 "traddr": "10.0.0.2", 00:16:37.329 "trsvcid": "4420" 00:16:37.329 }, 00:16:37.329 "peer_address": { 00:16:37.329 "trtype": "TCP", 00:16:37.329 "adrfam": "IPv4", 00:16:37.329 "traddr": "10.0.0.1", 00:16:37.329 "trsvcid": "53348" 00:16:37.329 }, 00:16:37.329 "auth": { 00:16:37.329 "state": "completed", 00:16:37.329 "digest": "sha512", 00:16:37.329 "dhgroup": "ffdhe4096" 00:16:37.329 } 00:16:37.329 } 00:16:37.329 ]' 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.329 04:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.329 04:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.894 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:37.894 04:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:38.826 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.827 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.392 00:16:39.650 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.650 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.650 04:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.907 04:07:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.907 { 00:16:39.907 "cntlid": 129, 00:16:39.907 "qid": 0, 00:16:39.907 "state": "enabled", 00:16:39.907 "thread": "nvmf_tgt_poll_group_000", 00:16:39.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:39.907 "listen_address": { 00:16:39.907 "trtype": "TCP", 00:16:39.907 "adrfam": "IPv4", 00:16:39.907 "traddr": "10.0.0.2", 00:16:39.907 "trsvcid": "4420" 00:16:39.907 }, 00:16:39.907 "peer_address": { 00:16:39.907 "trtype": "TCP", 00:16:39.907 "adrfam": "IPv4", 00:16:39.907 "traddr": "10.0.0.1", 00:16:39.907 "trsvcid": "47534" 00:16:39.907 }, 00:16:39.907 "auth": { 00:16:39.907 "state": "completed", 00:16:39.907 "digest": "sha512", 00:16:39.907 "dhgroup": "ffdhe6144" 00:16:39.907 } 00:16:39.907 } 00:16:39.907 ]' 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.907 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.164 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:40.164 04:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:41.113 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.113 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.114 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.114 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.114 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.114 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.114 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.114 04:07:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.373 04:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.940 00:16:41.940 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.940 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.940 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.199 { 00:16:42.199 "cntlid": 131, 00:16:42.199 "qid": 0, 00:16:42.199 "state": "enabled", 00:16:42.199 "thread": "nvmf_tgt_poll_group_000", 00:16:42.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:42.199 "listen_address": { 00:16:42.199 "trtype": "TCP", 00:16:42.199 "adrfam": "IPv4", 00:16:42.199 "traddr": "10.0.0.2", 00:16:42.199 
"trsvcid": "4420" 00:16:42.199 }, 00:16:42.199 "peer_address": { 00:16:42.199 "trtype": "TCP", 00:16:42.199 "adrfam": "IPv4", 00:16:42.199 "traddr": "10.0.0.1", 00:16:42.199 "trsvcid": "47556" 00:16:42.199 }, 00:16:42.199 "auth": { 00:16:42.199 "state": "completed", 00:16:42.199 "digest": "sha512", 00:16:42.199 "dhgroup": "ffdhe6144" 00:16:42.199 } 00:16:42.199 } 00:16:42.199 ]' 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.199 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.457 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:42.457 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.457 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.457 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.457 04:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.716 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:42.716 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.651 04:07:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.909 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.475 00:16:44.475 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.475 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:44.475 04:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.733 { 00:16:44.733 "cntlid": 133, 00:16:44.733 "qid": 0, 00:16:44.733 "state": "enabled", 00:16:44.733 "thread": "nvmf_tgt_poll_group_000", 00:16:44.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:44.733 "listen_address": { 00:16:44.733 "trtype": "TCP", 00:16:44.733 "adrfam": "IPv4", 00:16:44.733 "traddr": "10.0.0.2", 00:16:44.733 "trsvcid": "4420" 00:16:44.733 }, 00:16:44.733 "peer_address": { 00:16:44.733 "trtype": "TCP", 00:16:44.733 "adrfam": "IPv4", 00:16:44.733 "traddr": "10.0.0.1", 00:16:44.733 "trsvcid": "47586" 00:16:44.733 }, 00:16:44.733 "auth": { 00:16:44.733 "state": "completed", 00:16:44.733 "digest": "sha512", 00:16:44.733 "dhgroup": "ffdhe6144" 00:16:44.733 } 00:16:44.733 } 00:16:44.733 ]' 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.733 04:07:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.733 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.991 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:44.991 04:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:45.924 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.924 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:45.924 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.924 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.924 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.924 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.925 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:45.925 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:46.488 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:46.488 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.488 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.488 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:46.488 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.488 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.489 04:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.057 00:16:47.057 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.057 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.057 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.314 { 00:16:47.314 "cntlid": 135, 00:16:47.314 "qid": 0, 00:16:47.314 "state": "enabled", 00:16:47.314 "thread": "nvmf_tgt_poll_group_000", 00:16:47.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:47.314 "listen_address": { 00:16:47.314 "trtype": "TCP", 00:16:47.314 "adrfam": "IPv4", 00:16:47.314 "traddr": "10.0.0.2", 00:16:47.314 "trsvcid": "4420" 00:16:47.314 }, 00:16:47.314 "peer_address": { 00:16:47.314 "trtype": "TCP", 00:16:47.314 "adrfam": "IPv4", 00:16:47.314 "traddr": "10.0.0.1", 00:16:47.314 "trsvcid": "47626" 00:16:47.314 }, 00:16:47.314 "auth": { 00:16:47.314 "state": "completed", 00:16:47.314 "digest": "sha512", 00:16:47.314 "dhgroup": "ffdhe6144" 00:16:47.314 } 00:16:47.314 } 00:16:47.314 ]' 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.314 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.315 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.315 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:47.315 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.315 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.315 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.315 04:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.581 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:47.581 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.668 04:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.668 04:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.668 04:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.600 00:16:49.600 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.600 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.600 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.858 { 00:16:49.858 "cntlid": 137, 00:16:49.858 "qid": 0, 00:16:49.858 "state": "enabled", 00:16:49.858 "thread": "nvmf_tgt_poll_group_000", 00:16:49.858 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:49.858 "listen_address": { 00:16:49.858 "trtype": "TCP", 00:16:49.858 "adrfam": "IPv4", 00:16:49.858 "traddr": "10.0.0.2", 00:16:49.858 
"trsvcid": "4420" 00:16:49.858 }, 00:16:49.858 "peer_address": { 00:16:49.858 "trtype": "TCP", 00:16:49.858 "adrfam": "IPv4", 00:16:49.858 "traddr": "10.0.0.1", 00:16:49.858 "trsvcid": "40622" 00:16:49.858 }, 00:16:49.858 "auth": { 00:16:49.858 "state": "completed", 00:16:49.858 "digest": "sha512", 00:16:49.858 "dhgroup": "ffdhe8192" 00:16:49.858 } 00:16:49.858 } 00:16:49.858 ]' 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:49.858 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.115 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.115 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.115 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.373 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:50.373 04:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.305 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.563 04:07:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.563 04:07:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.496 00:16:52.496 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.496 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.496 04:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.496 { 00:16:52.496 "cntlid": 139, 00:16:52.496 "qid": 0, 00:16:52.496 "state": "enabled", 00:16:52.496 "thread": "nvmf_tgt_poll_group_000", 00:16:52.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.496 "listen_address": { 00:16:52.496 "trtype": "TCP", 00:16:52.496 "adrfam": "IPv4", 00:16:52.496 "traddr": "10.0.0.2", 00:16:52.496 "trsvcid": "4420" 00:16:52.496 }, 00:16:52.496 "peer_address": { 00:16:52.496 "trtype": "TCP", 00:16:52.496 "adrfam": "IPv4", 00:16:52.496 "traddr": "10.0.0.1", 00:16:52.496 "trsvcid": "40646" 00:16:52.496 }, 00:16:52.496 "auth": { 00:16:52.496 "state": "completed", 00:16:52.496 "digest": "sha512", 00:16:52.496 "dhgroup": "ffdhe8192" 00:16:52.496 } 00:16:52.496 } 00:16:52.496 ]' 00:16:52.496 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.753 04:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.753 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.753 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.753 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.753 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.753 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.754 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.011 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:53.011 04:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: --dhchap-ctrl-secret DHHC-1:02:Y2M4Mzg4NTFlZDk1NGQ5OTczMTBjYjc2MDE4NGQwZGYxMTZlNDIxY2RjYmY1OGRly5HRhw==: 00:16:53.944 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.944 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.944 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.944 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.944 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.944 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:53.945 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:53.945 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.202 04:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:55.133 00:16:55.133 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.133 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.133 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:55.389 04:07:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:55.389 { 00:16:55.389 "cntlid": 141, 00:16:55.389 "qid": 0, 00:16:55.389 "state": "enabled", 00:16:55.389 "thread": "nvmf_tgt_poll_group_000", 00:16:55.389 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:55.389 "listen_address": { 00:16:55.389 "trtype": "TCP", 00:16:55.389 "adrfam": "IPv4", 00:16:55.389 "traddr": "10.0.0.2", 00:16:55.389 "trsvcid": "4420" 00:16:55.389 }, 00:16:55.389 "peer_address": { 00:16:55.389 "trtype": "TCP", 00:16:55.389 "adrfam": "IPv4", 00:16:55.389 "traddr": "10.0.0.1", 00:16:55.389 "trsvcid": "40668" 00:16:55.389 }, 00:16:55.389 "auth": { 00:16:55.389 "state": "completed", 00:16:55.389 "digest": "sha512", 00:16:55.389 "dhgroup": "ffdhe8192" 00:16:55.389 } 00:16:55.389 } 00:16:55.389 ]' 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:55.389 04:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.646 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:55.646 04:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:01:MzhjMDg2NjJhY2MzYzZlNzc5NmQ4MDdhNzJlZGIzZTPfqYMa: 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.576 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:56.832 04:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:57.760 00:16:57.760 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.760 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.760 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.018 { 00:16:58.018 "cntlid": 143, 00:16:58.018 "qid": 0, 00:16:58.018 "state": "enabled", 00:16:58.018 "thread": "nvmf_tgt_poll_group_000", 00:16:58.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:58.018 "listen_address": { 00:16:58.018 "trtype": "TCP", 00:16:58.018 "adrfam": 
"IPv4", 00:16:58.018 "traddr": "10.0.0.2", 00:16:58.018 "trsvcid": "4420" 00:16:58.018 }, 00:16:58.018 "peer_address": { 00:16:58.018 "trtype": "TCP", 00:16:58.018 "adrfam": "IPv4", 00:16:58.018 "traddr": "10.0.0.1", 00:16:58.018 "trsvcid": "43548" 00:16:58.018 }, 00:16:58.018 "auth": { 00:16:58.018 "state": "completed", 00:16:58.018 "digest": "sha512", 00:16:58.018 "dhgroup": "ffdhe8192" 00:16:58.018 } 00:16:58.018 } 00:16:58.018 ]' 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.018 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.583 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:58.583 04:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:16:59.149 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.407 04:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.664 04:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.664 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.597 00:17:00.597 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:00.597 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:00.597 04:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.597 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.597 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.597 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.597 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.597 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.597 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.597 { 00:17:00.597 "cntlid": 145, 00:17:00.597 "qid": 0, 00:17:00.597 "state": "enabled", 00:17:00.597 "thread": "nvmf_tgt_poll_group_000", 00:17:00.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:00.597 "listen_address": { 00:17:00.597 "trtype": "TCP", 00:17:00.597 "adrfam": "IPv4", 00:17:00.597 "traddr": "10.0.0.2", 00:17:00.597 "trsvcid": "4420" 00:17:00.597 }, 00:17:00.597 "peer_address": { 00:17:00.597 "trtype": "TCP", 00:17:00.597 "adrfam": "IPv4", 00:17:00.597 "traddr": "10.0.0.1", 00:17:00.597 "trsvcid": "43580" 00:17:00.597 }, 00:17:00.597 "auth": { 00:17:00.597 "state": 
"completed", 00:17:00.597 "digest": "sha512", 00:17:00.597 "dhgroup": "ffdhe8192" 00:17:00.597 } 00:17:00.598 } 00:17:00.598 ]' 00:17:00.598 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.598 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:00.598 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.856 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:00.856 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.856 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.856 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.856 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.113 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:17:01.113 04:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:NGU1NDYzNzc1ZTQ1NGQ5MjRjMzFlNmI1NDNlZGM3NjU4ODQ4ZWMxYTcxMjNkNDBj1w+E7A==: --dhchap-ctrl-secret 
DHHC-1:03:ZmE3NzIxMmYwMTgzZDIxYzE1NDI4YjZkZDgzZTE1YTEwODRkNWJiODNjODJjMWU0NDI1MTUxZjQ4MWEwOWE4M9nZ7wg=: 00:17:02.048 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.048 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.048 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:02.049 04:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:17:02.983 request: 00:17:02.983 { 00:17:02.983 "name": "nvme0", 00:17:02.983 "trtype": "tcp", 00:17:02.983 "traddr": "10.0.0.2", 00:17:02.983 "adrfam": "ipv4", 00:17:02.983 "trsvcid": "4420", 00:17:02.983 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:02.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:02.983 "prchk_reftag": false, 00:17:02.983 "prchk_guard": false, 00:17:02.983 "hdgst": false, 00:17:02.983 "ddgst": false, 00:17:02.983 "dhchap_key": "key2", 00:17:02.983 "allow_unrecognized_csi": false, 00:17:02.983 "method": "bdev_nvme_attach_controller", 00:17:02.983 "req_id": 1 00:17:02.983 } 00:17:02.983 Got JSON-RPC error response 00:17:02.983 response: 00:17:02.983 { 00:17:02.983 "code": -5, 00:17:02.983 "message": 
"Input/output error" 00:17:02.983 } 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:02.983 04:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:02.983 04:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:03.566 request: 00:17:03.566 { 00:17:03.566 "name": "nvme0", 00:17:03.566 "trtype": "tcp", 00:17:03.566 "traddr": "10.0.0.2", 00:17:03.566 "adrfam": "ipv4", 00:17:03.566 "trsvcid": "4420", 00:17:03.566 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:03.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:03.566 "prchk_reftag": false, 00:17:03.566 "prchk_guard": false, 00:17:03.566 "hdgst": 
false, 00:17:03.566 "ddgst": false, 00:17:03.566 "dhchap_key": "key1", 00:17:03.566 "dhchap_ctrlr_key": "ckey2", 00:17:03.566 "allow_unrecognized_csi": false, 00:17:03.566 "method": "bdev_nvme_attach_controller", 00:17:03.566 "req_id": 1 00:17:03.566 } 00:17:03.566 Got JSON-RPC error response 00:17:03.566 response: 00:17:03.566 { 00:17:03.566 "code": -5, 00:17:03.566 "message": "Input/output error" 00:17:03.566 } 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.566 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.498 request: 00:17:04.498 { 00:17:04.498 "name": "nvme0", 00:17:04.498 "trtype": 
"tcp", 00:17:04.498 "traddr": "10.0.0.2", 00:17:04.498 "adrfam": "ipv4", 00:17:04.498 "trsvcid": "4420", 00:17:04.498 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:04.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:04.498 "prchk_reftag": false, 00:17:04.498 "prchk_guard": false, 00:17:04.498 "hdgst": false, 00:17:04.498 "ddgst": false, 00:17:04.498 "dhchap_key": "key1", 00:17:04.498 "dhchap_ctrlr_key": "ckey1", 00:17:04.498 "allow_unrecognized_csi": false, 00:17:04.498 "method": "bdev_nvme_attach_controller", 00:17:04.498 "req_id": 1 00:17:04.498 } 00:17:04.498 Got JSON-RPC error response 00:17:04.498 response: 00:17:04.498 { 00:17:04.498 "code": -5, 00:17:04.498 "message": "Input/output error" 00:17:04.498 } 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 217205 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 217205 ']' 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 217205 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217205 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217205' 00:17:04.498 killing process with pid 217205 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 217205 00:17:04.498 04:07:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 217205 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=240449 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 240449 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240449 ']' 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:04.755 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 240449 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240449 ']' 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.013 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.271 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:05.271 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:05.271 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:05.271 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.271 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.528 null0 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.PM3 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.RCN ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RCN 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.tVR 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.1NG ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.1NG 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.r4l 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Fwb ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Fwb 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:05.528 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.y9n 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:05.529 04:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:06.908 nvme0n1 00:17:06.908 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:06.908 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:06.908 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.165 { 00:17:07.165 "cntlid": 1, 00:17:07.165 "qid": 0, 00:17:07.165 "state": "enabled", 00:17:07.165 "thread": "nvmf_tgt_poll_group_000", 00:17:07.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:07.165 "listen_address": { 00:17:07.165 "trtype": "TCP", 00:17:07.165 "adrfam": "IPv4", 00:17:07.165 "traddr": "10.0.0.2", 00:17:07.165 "trsvcid": "4420" 00:17:07.165 }, 00:17:07.165 "peer_address": { 00:17:07.165 "trtype": "TCP", 00:17:07.165 "adrfam": "IPv4", 00:17:07.165 "traddr": 
"10.0.0.1", 00:17:07.165 "trsvcid": "43638" 00:17:07.165 }, 00:17:07.165 "auth": { 00:17:07.165 "state": "completed", 00:17:07.165 "digest": "sha512", 00:17:07.165 "dhgroup": "ffdhe8192" 00:17:07.165 } 00:17:07.165 } 00:17:07.165 ]' 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.165 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.422 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:17:07.422 04:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:17:08.353 04:07:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:08.353 04:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:08.610 04:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.610 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:08.866 request: 00:17:08.866 { 00:17:08.866 "name": "nvme0", 00:17:08.866 "trtype": "tcp", 00:17:08.866 "traddr": "10.0.0.2", 00:17:08.866 "adrfam": "ipv4", 00:17:08.866 "trsvcid": "4420", 00:17:08.866 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:08.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:08.866 "prchk_reftag": false, 00:17:08.866 "prchk_guard": false, 00:17:08.866 "hdgst": false, 00:17:08.866 "ddgst": false, 00:17:08.866 "dhchap_key": "key3", 00:17:08.866 
"allow_unrecognized_csi": false, 00:17:08.866 "method": "bdev_nvme_attach_controller", 00:17:08.866 "req_id": 1 00:17:08.866 } 00:17:08.867 Got JSON-RPC error response 00:17:08.867 response: 00:17:08.867 { 00:17:08.867 "code": -5, 00:17:08.867 "message": "Input/output error" 00:17:08.867 } 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:09.123 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.381 04:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.381 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.638 request: 00:17:09.638 { 00:17:09.638 "name": "nvme0", 00:17:09.638 "trtype": "tcp", 00:17:09.638 "traddr": "10.0.0.2", 00:17:09.639 "adrfam": "ipv4", 00:17:09.639 "trsvcid": "4420", 00:17:09.639 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:09.639 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:09.639 "prchk_reftag": false, 00:17:09.639 "prchk_guard": false, 00:17:09.639 "hdgst": false, 00:17:09.639 "ddgst": false, 00:17:09.639 "dhchap_key": "key3", 00:17:09.639 "allow_unrecognized_csi": false, 00:17:09.639 "method": "bdev_nvme_attach_controller", 00:17:09.639 "req_id": 1 00:17:09.639 } 00:17:09.639 Got JSON-RPC error response 00:17:09.639 response: 00:17:09.639 { 00:17:09.639 "code": -5, 00:17:09.639 "message": "Input/output error" 00:17:09.639 } 00:17:09.639 
04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.639 04:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:09.896 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:10.461 request: 00:17:10.461 { 00:17:10.461 "name": "nvme0", 00:17:10.461 "trtype": "tcp", 00:17:10.461 "traddr": "10.0.0.2", 00:17:10.461 "adrfam": "ipv4", 00:17:10.461 "trsvcid": "4420", 00:17:10.461 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:10.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:10.461 "prchk_reftag": false, 00:17:10.461 "prchk_guard": false, 00:17:10.461 "hdgst": false, 00:17:10.461 "ddgst": false, 00:17:10.461 "dhchap_key": "key0", 00:17:10.461 "dhchap_ctrlr_key": "key1", 00:17:10.461 "allow_unrecognized_csi": false, 00:17:10.461 "method": "bdev_nvme_attach_controller", 00:17:10.461 "req_id": 1 00:17:10.461 } 00:17:10.461 Got JSON-RPC error response 00:17:10.461 response: 00:17:10.461 { 00:17:10.461 "code": -5, 00:17:10.461 "message": "Input/output error" 00:17:10.461 } 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:10.461 04:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:10.719 nvme0n1 00:17:10.719 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:10.719 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:10.719 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.976 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.976 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.976 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:11.234 04:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:12.609 nvme0n1 00:17:12.609 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:12.609 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:12.609 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.868 
04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:12.868 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.126 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.126 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:17:13.126 04:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: --dhchap-ctrl-secret DHHC-1:03:MzkwMjgzMjZmM2IzMzViZmVlOWU5NjE4NmFiNWQzZWRmNzBiMWUwZjZiMGE1YzMxNWZjYzBjODA5MDlkNWQ3MLRoaLQ=: 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.058 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.315 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:14.316 04:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:15.288 request: 00:17:15.288 { 00:17:15.288 "name": "nvme0", 00:17:15.288 "trtype": "tcp", 00:17:15.288 "traddr": "10.0.0.2", 00:17:15.288 "adrfam": "ipv4", 00:17:15.288 "trsvcid": "4420", 00:17:15.288 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.288 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.288 "prchk_reftag": false, 00:17:15.288 "prchk_guard": false, 00:17:15.288 "hdgst": false, 00:17:15.288 "ddgst": false, 00:17:15.288 "dhchap_key": "key1", 00:17:15.288 "allow_unrecognized_csi": false, 00:17:15.288 "method": "bdev_nvme_attach_controller", 00:17:15.288 "req_id": 1 00:17:15.288 } 00:17:15.288 Got JSON-RPC error response 00:17:15.288 response: 00:17:15.288 { 00:17:15.288 "code": -5, 00:17:15.288 "message": "Input/output error" 00:17:15.288 } 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
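Aside: the failed `bdev_nvme_attach_controller` call above is an ordinary JSON-RPC request sent over the `/var/tmp/host.sock` Unix socket by `scripts/rpc.py`. A minimal sketch of how that request body is assembled — field names and values are taken verbatim from the logged request, the socket transport itself is omitted, and note the test wraps the call in `NOT` because supplying only `key1` is expected to be rejected (the `-5` Input/output error response confirms the expected failure):

```python
import json

# Parameters copied from the JSON-RPC request dump in the log above.
# Only the host DH-HMAC-CHAP key ("key1") is supplied; the test expects
# the target to reject this attach attempt.
params = {
    "name": "nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2024-03.io.spdk:cnode0",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
    "dhchap_key": "key1",
}

# Standard JSON-RPC 2.0 envelope; "id" matches "req_id": 1 in the log.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_nvme_attach_controller",
    "params": params,
}

payload = json.dumps(request)
```

Serializing `request` and writing it to the host socket is essentially what `rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller ...` does internally; the target answers with the `"code": -5` error object shown in the log.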
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:15.288 04:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:16.659 nvme0n1 00:17:16.659 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:16.659 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.659 04:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:16.915 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.915 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.915 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:17.172 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:17.430 nvme0n1 00:17:17.430 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:17.430 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:17.430 04:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.687 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.687 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.687 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: '' 2s 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: ]] 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZGNmOGI1MzZlMTQ1MTM2ODhkOGY0OThlM2EyN2U2ZjfRGgzy: 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:17.945 04:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:20.471 
04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: 2s 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:20.471 04:07:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: ]] 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:N2QyMjUyYjRmNzRlN2RmNDM0YmJiNjRjMTQ5MjcxY2MxYzI0MDdjOWYzZDg0MDBiHbP11g==: 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:20.471 04:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:22.371 04:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:23.743 nvme0n1 00:17:23.743 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.743 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.743 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.743 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.743 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:23.743 04:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:24.306 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:24.306 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:24.306 04:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:24.562 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:24.820 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:24.820 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.820 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:25.077 04:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:26.007 request: 00:17:26.007 { 00:17:26.007 "name": "nvme0", 00:17:26.007 "dhchap_key": "key1", 00:17:26.007 "dhchap_ctrlr_key": "key3", 00:17:26.007 "method": "bdev_nvme_set_keys", 00:17:26.007 "req_id": 1 00:17:26.007 } 00:17:26.007 Got JSON-RPC error response 00:17:26.007 response: 00:17:26.007 { 00:17:26.007 "code": -13, 00:17:26.007 "message": "Permission denied" 00:17:26.007 } 00:17:26.007 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:26.007 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:26.007 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:26.007 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:26.007 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:26.007 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:26.007 04:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.263 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:26.263 04:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:27.193 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:27.193 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:27.193 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:27.450 04:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:28.821 nvme0n1 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.821 04:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:28.821 04:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:29.754 request: 00:17:29.754 { 00:17:29.754 "name": "nvme0", 00:17:29.754 "dhchap_key": "key2", 00:17:29.754 "dhchap_ctrlr_key": "key0", 00:17:29.754 "method": "bdev_nvme_set_keys", 00:17:29.754 "req_id": 1 00:17:29.754 } 00:17:29.754 Got JSON-RPC error response 00:17:29.754 response: 00:17:29.754 { 00:17:29.754 "code": -13, 00:17:29.754 "message": "Permission denied" 00:17:29.754 } 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.754 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:30.012 04:07:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:30.012 04:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:30.944 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:30.944 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:30.944 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 217228 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 217228 ']' 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 217228 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.203 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217228 00:17:31.461 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:31.461 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:31.461 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 217228' 00:17:31.461 killing process with pid 217228 00:17:31.461 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 217228 00:17:31.461 04:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 217228 00:17:31.718 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:31.718 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.718 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:17:31.718 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.719 rmmod nvme_tcp 00:17:31.719 rmmod nvme_fabrics 00:17:31.719 rmmod nvme_keyring 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 240449 ']' 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 240449 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 240449 ']' 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 240449 00:17:31.719 04:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240449 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240449' 00:17:31.719 killing process with pid 240449 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 240449 00:17:31.719 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 240449 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.977 04:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.PM3 /tmp/spdk.key-sha256.tVR /tmp/spdk.key-sha384.r4l /tmp/spdk.key-sha512.y9n /tmp/spdk.key-sha512.RCN /tmp/spdk.key-sha384.1NG /tmp/spdk.key-sha256.Fwb '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:34.523 00:17:34.523 real 3m33.435s 00:17:34.523 user 8m19.309s 00:17:34.523 sys 0m28.198s 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.523 ************************************ 00:17:34.523 END TEST nvmf_auth_target 00:17:34.523 ************************************ 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.523 04:08:02 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:34.523 ************************************ 00:17:34.523 START TEST nvmf_bdevio_no_huge 00:17:34.523 ************************************ 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:34.523 * Looking for test storage... 00:17:34.523 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:34.523 04:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:34.523 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:34.523 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.523 --rc genhtml_branch_coverage=1 00:17:34.524 --rc genhtml_function_coverage=1 00:17:34.524 --rc genhtml_legend=1 00:17:34.524 --rc geninfo_all_blocks=1 00:17:34.524 --rc geninfo_unexecuted_blocks=1 00:17:34.524 00:17:34.524 ' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.524 --rc genhtml_branch_coverage=1 00:17:34.524 --rc genhtml_function_coverage=1 00:17:34.524 --rc genhtml_legend=1 00:17:34.524 --rc geninfo_all_blocks=1 00:17:34.524 --rc geninfo_unexecuted_blocks=1 00:17:34.524 00:17:34.524 ' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.524 --rc genhtml_branch_coverage=1 00:17:34.524 --rc genhtml_function_coverage=1 00:17:34.524 --rc genhtml_legend=1 00:17:34.524 --rc geninfo_all_blocks=1 00:17:34.524 --rc geninfo_unexecuted_blocks=1 00:17:34.524 00:17:34.524 ' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:34.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:34.524 --rc genhtml_branch_coverage=1 00:17:34.524 --rc 
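The `cmp_versions` calls traced above split each version string on `.`, `-` and `:` (`IFS=.-:`; `read -ra`) and compare components numerically, padding the shorter version with zeros. A standalone sketch of that approach, assuming numeric components; names are illustrative rather than SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Compare two dotted version strings component by component.
# Prints "lt", "eq", or "gt" for ver1 relative to ver2.
cmp_versions() {
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing components count as 0, so 1.15 == 1.15.0.
        local a=${v1[i]:-0} b=${v2[i]:-0}
        if (( a < b )); then echo lt; return; fi
        if (( a > b )); then echo gt; return; fi
    done
    echo eq
}

cmp_versions 1.15 2    # prints "lt" -- matches the log's "lt 1.15 2" check
```

This is why the trace compares `lcov --version` against 2 one component at a time instead of with a plain string comparison, which would order "1.15" after "1.2".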
genhtml_function_coverage=1 00:17:34.524 --rc genhtml_legend=1 00:17:34.524 --rc geninfo_all_blocks=1 00:17:34.524 --rc geninfo_unexecuted_blocks=1 00:17:34.524 00:17:34.524 ' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:34.524 04:08:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:34.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
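The `[: : integer expression expected` line in the trace above is the classic result of handing `[ ... ] -eq` an empty operand (`'[' '' -eq 1 ']'` at common.sh line 33). A minimal repro and the usual guard; `flag` is an illustrative name, not the actual SPDK variable:

```shell
#!/usr/bin/env bash
# Repro of the "integer expression expected" noise in the log:
# a numeric test against an empty/unset variable is an error in [ ... ].
flag=""

# This form errors when $flag is empty:
#   [ "$flag" -eq 1 ]   ->   [: : integer expression expected

# Guard by defaulting the empty value to 0 before the numeric test:
if [ "${flag:-0}" -eq 1 ]; then
    echo "feature enabled"
else
    echo "feature disabled"
fi                                   # prints "feature disabled"
```

The test continues past the warning here because `[` merely returns nonzero, but the `${var:-0}` default keeps logs clean and makes the intent (unset means disabled) explicit.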
00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:17:34.524 04:08:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:36.422 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:17:36.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:36.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:36.423 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:36.423 
04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:36.423 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:36.423 04:08:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
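The namespace plumbing traced above (create `cvl_0_0_ns_spdk`, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open port 4420) can be sketched as a dry-run script. Interface and namespace names are taken from this log; `run` only echoes, since the real commands need root:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup traced in the log.
NS=cvl_0_0_ns_spdk       # network namespace holding the NVMe-oF target
TGT_IF=cvl_0_0           # target-side interface (moved into the namespace)
INI_IF=cvl_0_1           # initiator-side interface (stays in the root ns)

run() { echo "+ $*"; }   # swap for: run() { "$@"; } to apply for real

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment "SPDK_NVMF"
```

Isolating the target NIC in its own namespace lets the initiator and target talk over real hardware on one host, which is why the log's pings run both plain and under `ip netns exec cvl_0_0_ns_spdk`.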
1 10.0.0.2 00:17:36.681 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:36.681 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:17:36.681 00:17:36.681 --- 10.0.0.2 ping statistics --- 00:17:36.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.681 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:36.681 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:36.681 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:17:36.681 00:17:36.681 --- 10.0.0.1 ping statistics --- 00:17:36.681 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:36.681 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=245695 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 245695 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 245695 ']' 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:36.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.681 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.681 [2024-12-09 04:08:05.185001] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:17:36.681 [2024-12-09 04:08:05.185097] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:36.939 [2024-12-09 04:08:05.265362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:36.939 [2024-12-09 04:08:05.320102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:36.939 [2024-12-09 04:08:05.320169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:36.939 [2024-12-09 04:08:05.320192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:36.939 [2024-12-09 04:08:05.320202] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:36.939 [2024-12-09 04:08:05.320213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:36.939 [2024-12-09 04:08:05.321186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:36.939 [2024-12-09 04:08:05.321248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:36.939 [2024-12-09 04:08:05.321315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:36.939 [2024-12-09 04:08:05.321321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 [2024-12-09 04:08:05.473239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:36.939 04:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 Malloc0 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.939 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:36.939 [2024-12-09 04:08:05.511682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.196 04:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:17:37.196 { 00:17:37.196 "params": { 00:17:37.196 "name": "Nvme$subsystem", 00:17:37.196 "trtype": "$TEST_TRANSPORT", 00:17:37.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:37.196 "adrfam": "ipv4", 00:17:37.196 "trsvcid": "$NVMF_PORT", 00:17:37.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:37.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:37.196 "hdgst": ${hdgst:-false}, 00:17:37.196 "ddgst": ${ddgst:-false} 00:17:37.196 }, 00:17:37.196 "method": "bdev_nvme_attach_controller" 00:17:37.196 } 00:17:37.196 EOF 00:17:37.196 )") 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:17:37.196 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:17:37.197 04:08:05 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:17:37.197 "params": { 00:17:37.197 "name": "Nvme1", 00:17:37.197 "trtype": "tcp", 00:17:37.197 "traddr": "10.0.0.2", 00:17:37.197 "adrfam": "ipv4", 00:17:37.197 "trsvcid": "4420", 00:17:37.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:37.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:37.197 "hdgst": false, 00:17:37.197 "ddgst": false 00:17:37.197 }, 00:17:37.197 "method": "bdev_nvme_attach_controller" 00:17:37.197 }' 00:17:37.197 [2024-12-09 04:08:05.560847] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:17:37.197 [2024-12-09 04:08:05.560923] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid245803 ] 00:17:37.197 [2024-12-09 04:08:05.632521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:37.197 [2024-12-09 04:08:05.698305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.197 [2024-12-09 04:08:05.698332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.197 [2024-12-09 04:08:05.698336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.761 I/O targets: 00:17:37.761 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:37.761 00:17:37.761 00:17:37.761 CUnit - A unit testing framework for C - Version 2.1-3 00:17:37.761 http://cunit.sourceforge.net/ 00:17:37.761 00:17:37.761 00:17:37.761 Suite: bdevio tests on: Nvme1n1 00:17:37.761 Test: blockdev write read block ...passed 00:17:37.761 Test: blockdev write zeroes read block ...passed 00:17:37.761 Test: blockdev write zeroes read no split ...passed 00:17:37.761 Test: blockdev write zeroes 
read split ...passed 00:17:37.761 Test: blockdev write zeroes read split partial ...passed 00:17:37.761 Test: blockdev reset ...[2024-12-09 04:08:06.175364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:37.761 [2024-12-09 04:08:06.175480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c62b0 (9): Bad file descriptor 00:17:37.761 [2024-12-09 04:08:06.195896] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:37.761 passed 00:17:37.761 Test: blockdev write read 8 blocks ...passed 00:17:37.761 Test: blockdev write read size > 128k ...passed 00:17:37.761 Test: blockdev write read invalid size ...passed 00:17:37.761 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:37.761 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:37.761 Test: blockdev write read max offset ...passed 00:17:38.019 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:38.019 Test: blockdev writev readv 8 blocks ...passed 00:17:38.019 Test: blockdev writev readv 30 x 1block ...passed 00:17:38.019 Test: blockdev writev readv block ...passed 00:17:38.019 Test: blockdev writev readv size > 128k ...passed 00:17:38.019 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:38.019 Test: blockdev comparev and writev ...[2024-12-09 04:08:06.409404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.409441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.409465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 
04:08:06.409483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.409813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.409837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.409860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.409876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.410201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.410226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.410248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.410264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.410606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.410631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:38.019 [2024-12-09 04:08:06.410652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:17:38.019 [2024-12-09 04:08:06.410668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:38.019 passed 00:17:38.019 Test: blockdev nvme passthru rw ...passed 00:17:38.019 Test: blockdev nvme passthru vendor specific ...[2024-12-09 04:08:06.493502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.020 [2024-12-09 04:08:06.493531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:38.020 [2024-12-09 04:08:06.493670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.020 [2024-12-09 04:08:06.493693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:38.020 [2024-12-09 04:08:06.493820] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.020 [2024-12-09 04:08:06.493844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:38.020 [2024-12-09 04:08:06.493982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:38.020 [2024-12-09 04:08:06.494006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:38.020 passed 00:17:38.020 Test: blockdev nvme admin passthru ...passed 00:17:38.020 Test: blockdev copy ...passed 00:17:38.020 00:17:38.020 Run Summary: Type Total Ran Passed Failed Inactive 00:17:38.020 suites 1 1 n/a 0 0 00:17:38.020 tests 23 23 23 0 0 00:17:38.020 asserts 152 152 152 0 n/a 00:17:38.020 00:17:38.020 Elapsed time = 0.986 seconds 
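The bdevio run above was fed a JSON config produced by `gen_nvmf_target_json` (nvmf/common.sh), which the trace prints verbatim just before the EAL parameters. As a rough sketch only, the same structure can be rebuilt in Python; the field values are copied from the log output, and the Python helper name and parameters are illustrative assumptions, not part of SPDK:

```python
import json


def gen_nvmf_target_json(subsystem: int = 1,
                         trtype: str = "tcp",
                         traddr: str = "10.0.0.2",
                         trsvcid: str = "4420",
                         hdgst: bool = False,
                         ddgst: bool = False) -> dict:
    """Build one bdev_nvme_attach_controller entry, mirroring the JSON
    that gen_nvmf_target_json emitted in the trace above (hypothetical
    reimplementation; the real helper is a bash heredoc in nvmf/common.sh)."""
    return {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": trtype,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": hdgst,
            "ddgst": ddgst,
        },
        "method": "bdev_nvme_attach_controller",
    }


# Matches the config block printed in the log for Nvme1 on 10.0.0.2:4420.
print(json.dumps(gen_nvmf_target_json(), indent=2))
```

The defaults reproduce exactly the entry the test passed to bdevio via `--json /dev/fd/62`; in the real script, `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT` fill those slots.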
00:17:38.585 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:38.586 rmmod nvme_tcp 00:17:38.586 rmmod nvme_fabrics 00:17:38.586 rmmod nvme_keyring 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 245695 ']' 00:17:38.586 04:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 245695 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 245695 ']' 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 245695 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 245695 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 245695' 00:17:38.586 killing process with pid 245695 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 245695 00:17:38.586 04:08:06 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 245695 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:17:38.844 04:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.844 04:08:07 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:41.378 00:17:41.378 real 0m6.822s 00:17:41.378 user 0m11.223s 00:17:41.378 sys 0m2.633s 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:41.378 ************************************ 00:17:41.378 END TEST nvmf_bdevio_no_huge 00:17:41.378 ************************************ 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.378 
************************************ 00:17:41.378 START TEST nvmf_tls 00:17:41.378 ************************************ 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:41.378 * Looking for test storage... 00:17:41.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.378 --rc genhtml_branch_coverage=1 00:17:41.378 --rc genhtml_function_coverage=1 00:17:41.378 --rc genhtml_legend=1 00:17:41.378 --rc geninfo_all_blocks=1 00:17:41.378 --rc geninfo_unexecuted_blocks=1 00:17:41.378 00:17:41.378 ' 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.378 --rc genhtml_branch_coverage=1 00:17:41.378 --rc genhtml_function_coverage=1 00:17:41.378 --rc genhtml_legend=1 00:17:41.378 --rc geninfo_all_blocks=1 00:17:41.378 --rc geninfo_unexecuted_blocks=1 00:17:41.378 00:17:41.378 ' 00:17:41.378 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.379 --rc genhtml_branch_coverage=1 00:17:41.379 --rc genhtml_function_coverage=1 00:17:41.379 --rc genhtml_legend=1 00:17:41.379 --rc geninfo_all_blocks=1 00:17:41.379 --rc geninfo_unexecuted_blocks=1 00:17:41.379 00:17:41.379 ' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.379 --rc genhtml_branch_coverage=1 00:17:41.379 --rc genhtml_function_coverage=1 00:17:41.379 --rc genhtml_legend=1 00:17:41.379 --rc geninfo_all_blocks=1 00:17:41.379 --rc geninfo_unexecuted_blocks=1 00:17:41.379 00:17:41.379 ' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.379 
04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:17:41.379 04:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.280 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.281 04:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:43.281 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:43.281 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.281 04:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:43.281 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:43.281 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:43.281 04:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:43.281 
04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:43.281 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:43.539 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.539 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:17:43.539 00:17:43.539 --- 10.0.0.2 ping statistics --- 00:17:43.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.539 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:43.539 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.539 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:17:43.539 00:17:43.539 --- 10.0.0.1 ping statistics --- 00:17:43.539 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.539 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=247923 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 247923 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 247923 ']' 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.539 04:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.539 [2024-12-09 04:08:11.962404] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:17:43.539 [2024-12-09 04:08:11.962499] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.539 [2024-12-09 04:08:12.034790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.539 [2024-12-09 04:08:12.088108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.539 [2024-12-09 04:08:12.088167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:43.539 [2024-12-09 04:08:12.088189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:43.539 [2024-12-09 04:08:12.088200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:43.540 [2024-12-09 04:08:12.088209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:43.540 [2024-12-09 04:08:12.088829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:17:43.797 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:44.055 true 00:17:44.055 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.055 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:17:44.312 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:17:44.312 04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:17:44.312 
04:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:44.570 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:44.570 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:17:44.827 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:17:44.827 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:17:44.827 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:45.084 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.084 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:17:45.341 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:17:45.341 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:17:45.341 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.341 04:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:17:45.599 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:17:45.599 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:17:45.599 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
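Each of the option checks above follows the same round-trip pattern: set a value with `sock_impl_set_options`, read it back with `sock_impl_get_options`, and extract one field with `jq -r`. The extraction step is equivalent to this small sketch (the sample JSON below is illustrative, not captured from this run):

```python
import json

def extract_option(rpc_output: str, field: str):
    """Equivalent of piping sock_impl_get_options output through `jq -r .<field>`."""
    return json.loads(rpc_output)[field]

# e.g. after `sock_impl_set_options -i ssl --tls-version 13`, the test
# expects the read-back tls_version to match what was just set:
sample = '{"tls_version": 13, "enable_ktls": false}'
assert extract_option(sample, "tls_version") == 13
assert extract_option(sample, "enable_ktls") is False
```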
00:17:45.856 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:45.856 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:17:46.113 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:17:46.113 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:17:46.113 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:46.371 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:46.371 04:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:17:46.936 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:17:46.936 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:17:46.936 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:46.936 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:46.937 04:08:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.ApVRvtgogO 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lhfP9KGslT 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.ApVRvtgogO 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lhfP9KGslT 00:17:46.937 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:47.194 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:47.451 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.ApVRvtgogO 00:17:47.451 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.ApVRvtgogO 00:17:47.451 04:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:47.708 [2024-12-09 04:08:16.248562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:47.708 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:48.272 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:48.272 [2024-12-09 04:08:16.846158] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.272 [2024-12-09 04:08:16.846492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.529 04:08:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:48.787 malloc0 00:17:48.787 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:49.044 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO 00:17:49.300 04:08:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:49.558 04:08:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.ApVRvtgogO 00:18:01.743 Initializing NVMe Controllers 00:18:01.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:01.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:01.743 Initialization complete. Launching workers. 
00:18:01.743 ======================================================== 00:18:01.743 Latency(us) 00:18:01.743 Device Information : IOPS MiB/s Average min max 00:18:01.743 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8719.79 34.06 7341.62 1130.13 8902.20 00:18:01.743 ======================================================== 00:18:01.743 Total : 8719.79 34.06 7341.62 1130.13 8902.20 00:18:01.743 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ApVRvtgogO 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=249828 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 249828 /var/tmp/bdevperf.sock 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 249828 ']' 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
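The two `NVMeTLSkey-1:...` strings generated earlier by `format_interchange_psk` use the NVMe/TCP PSK "interchange" envelope. A sketch of that derivation, inferred from the keys visible in this log (base64 of the configured secret bytes plus a little-endian CRC32 trailer); treat the exact CRC handling as an assumption rather than a normative reference:

```python
import base64
import struct
import zlib

def format_interchange_psk(secret: str, hmac_id: int = 1) -> str:
    """Wrap a configured secret in the NVMe TLS PSK interchange envelope.

    Sketch inferred from the log above: NVMeTLSkey-1:<hh>:<base64>: where
    <base64> encodes the secret bytes followed by their CRC32 (little-endian).
    """
    data = secret.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(data))           # 4-byte LE CRC32 trailer
    b64 = base64.b64encode(data + crc).decode("ascii")  # secret || CRC32, base64'd
    return f"NVMeTLSkey-1:{hmac_id:02d}:{b64}:"
```

Applied to the first secret used above (`00112233445566778899aabbccddeeff` with hash id 1), this reproduces the `NVMeTLSkey-1:01:MDAxMTIy...` shape of the key the test writes to its temp file before registering it via `keyring_file_add_key`.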
00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:01.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.743 [2024-12-09 04:08:28.183337] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:01.743 [2024-12-09 04:08:28.183424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid249828 ] 00:18:01.743 [2024-12-09 04:08:28.256787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.743 [2024-12-09 04:08:28.317932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO 00:18:01.743 04:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:01.743 [2024-12-09 04:08:28.979424] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.743 TLSTESTn1 00:18:01.743 04:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.743 Running I/O for 10 seconds... 00:18:02.676 3112.00 IOPS, 12.16 MiB/s [2024-12-09T03:08:32.187Z] 3296.50 IOPS, 12.88 MiB/s [2024-12-09T03:08:33.560Z] 3389.33 IOPS, 13.24 MiB/s [2024-12-09T03:08:34.494Z] 3397.25 IOPS, 13.27 MiB/s [2024-12-09T03:08:35.432Z] 3394.40 IOPS, 13.26 MiB/s [2024-12-09T03:08:36.375Z] 3390.67 IOPS, 13.24 MiB/s [2024-12-09T03:08:37.306Z] 3390.57 IOPS, 13.24 MiB/s [2024-12-09T03:08:38.237Z] 3414.00 IOPS, 13.34 MiB/s [2024-12-09T03:08:39.608Z] 3404.11 IOPS, 13.30 MiB/s [2024-12-09T03:08:39.608Z] 3403.10 IOPS, 13.29 MiB/s 00:18:11.032 Latency(us) 00:18:11.032 [2024-12-09T03:08:39.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.032 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.032 Verification LBA range: start 0x0 length 0x2000 00:18:11.032 TLSTESTn1 : 10.02 3409.50 13.32 0.00 0.00 37481.12 6068.15 36117.62 00:18:11.032 [2024-12-09T03:08:39.608Z] =================================================================================================================== 00:18:11.032 [2024-12-09T03:08:39.608Z] Total : 3409.50 13.32 0.00 0.00 37481.12 6068.15 36117.62 00:18:11.032 { 00:18:11.032 "results": [ 00:18:11.032 { 00:18:11.032 "job": "TLSTESTn1", 00:18:11.032 "core_mask": "0x4", 00:18:11.032 "workload": "verify", 00:18:11.032 "status": "finished", 00:18:11.032 "verify_range": { 00:18:11.032 "start": 0, 00:18:11.032 "length": 8192 00:18:11.032 }, 00:18:11.032 "queue_depth": 128, 00:18:11.032 "io_size": 4096, 00:18:11.032 "runtime": 10.018171, 00:18:11.032 "iops": 
3409.5045892109447, 00:18:11.032 "mibps": 13.318377301605253, 00:18:11.032 "io_failed": 0, 00:18:11.032 "io_timeout": 0, 00:18:11.032 "avg_latency_us": 37481.11611675499, 00:18:11.032 "min_latency_us": 6068.148148148148, 00:18:11.032 "max_latency_us": 36117.61777777778 00:18:11.032 } 00:18:11.032 ], 00:18:11.032 "core_count": 1 00:18:11.032 } 00:18:11.032 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:11.032 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 249828 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 249828 ']' 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 249828 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 249828 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 249828' 00:18:11.033 killing process with pid 249828 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 249828 00:18:11.033 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.033 00:18:11.033 Latency(us) 00:18:11.033 [2024-12-09T03:08:39.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.033 [2024-12-09T03:08:39.609Z] 
=================================================================================================================== 00:18:11.033 [2024-12-09T03:08:39.609Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 249828 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhfP9KGslT 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhfP9KGslT 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lhfP9KGslT 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lhfP9KGslT 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251146 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251146 /var/tmp/bdevperf.sock 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251146 ']' 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:11.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:11.033 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:11.033 [2024-12-09 04:08:39.549155] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:11.033 [2024-12-09 04:08:39.549243] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251146 ] 00:18:11.291 [2024-12-09 04:08:39.620804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.291 [2024-12-09 04:08:39.678406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:11.291 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:11.291 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:11.291 04:08:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lhfP9KGslT 00:18:11.549 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:11.806 [2024-12-09 04:08:40.334170] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:11.806 [2024-12-09 04:08:40.341284] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:11.806 [2024-12-09 04:08:40.341581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1033f30 (107): Transport endpoint is not connected 00:18:11.806 [2024-12-09 04:08:40.342570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1033f30 (9): Bad file descriptor 00:18:11.806 
[2024-12-09 04:08:40.343569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:11.806 [2024-12-09 04:08:40.343604] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:11.806 [2024-12-09 04:08:40.343626] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:11.806 [2024-12-09 04:08:40.343643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:11.806 request: 00:18:11.806 { 00:18:11.806 "name": "TLSTEST", 00:18:11.806 "trtype": "tcp", 00:18:11.806 "traddr": "10.0.0.2", 00:18:11.806 "adrfam": "ipv4", 00:18:11.806 "trsvcid": "4420", 00:18:11.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:11.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:11.806 "prchk_reftag": false, 00:18:11.806 "prchk_guard": false, 00:18:11.806 "hdgst": false, 00:18:11.806 "ddgst": false, 00:18:11.806 "psk": "key0", 00:18:11.806 "allow_unrecognized_csi": false, 00:18:11.806 "method": "bdev_nvme_attach_controller", 00:18:11.806 "req_id": 1 00:18:11.806 } 00:18:11.806 Got JSON-RPC error response 00:18:11.806 response: 00:18:11.806 { 00:18:11.806 "code": -5, 00:18:11.806 "message": "Input/output error" 00:18:11.806 } 00:18:11.806 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251146 00:18:11.806 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251146 ']' 00:18:11.806 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251146 00:18:11.806 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.806 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.806 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251146 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251146' 00:18:12.064 killing process with pid 251146 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251146 00:18:12.064 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.064 00:18:12.064 Latency(us) 00:18:12.064 [2024-12-09T03:08:40.640Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.064 [2024-12-09T03:08:40.640Z] =================================================================================================================== 00:18:12.064 [2024-12-09T03:08:40.640Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251146 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ApVRvtgogO 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ApVRvtgogO 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.ApVRvtgogO 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ApVRvtgogO 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251288 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251288 
/var/tmp/bdevperf.sock 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251288 ']' 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.064 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.065 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.065 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.322 [2024-12-09 04:08:40.671670] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:12.322 [2024-12-09 04:08:40.671756] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251288 ] 00:18:12.322 [2024-12-09 04:08:40.744061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.322 [2024-12-09 04:08:40.804526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.580 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.580 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.580 04:08:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO 00:18:12.837 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:13.096 [2024-12-09 04:08:41.435210] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:13.096 [2024-12-09 04:08:41.444070] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:13.096 [2024-12-09 04:08:41.444104] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:13.096 [2024-12-09 04:08:41.444141] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:13.096 [2024-12-09 04:08:41.444438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4ff30 (107): Transport endpoint is not connected 00:18:13.096 [2024-12-09 04:08:41.445427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4ff30 (9): Bad file descriptor 00:18:13.096 [2024-12-09 04:08:41.446426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:13.096 [2024-12-09 04:08:41.446447] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:13.096 [2024-12-09 04:08:41.446461] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:13.096 [2024-12-09 04:08:41.446478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:13.096 request: 00:18:13.096 { 00:18:13.096 "name": "TLSTEST", 00:18:13.096 "trtype": "tcp", 00:18:13.096 "traddr": "10.0.0.2", 00:18:13.096 "adrfam": "ipv4", 00:18:13.096 "trsvcid": "4420", 00:18:13.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.096 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:13.096 "prchk_reftag": false, 00:18:13.096 "prchk_guard": false, 00:18:13.096 "hdgst": false, 00:18:13.096 "ddgst": false, 00:18:13.096 "psk": "key0", 00:18:13.096 "allow_unrecognized_csi": false, 00:18:13.096 "method": "bdev_nvme_attach_controller", 00:18:13.096 "req_id": 1 00:18:13.096 } 00:18:13.096 Got JSON-RPC error response 00:18:13.096 response: 00:18:13.096 { 00:18:13.096 "code": -5, 00:18:13.096 "message": "Input/output error" 00:18:13.096 } 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251288 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251288 ']' 00:18:13.096 04:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251288 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251288 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251288' 00:18:13.096 killing process with pid 251288 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251288 00:18:13.096 Received shutdown signal, test time was about 10.000000 seconds 00:18:13.096 00:18:13.096 Latency(us) 00:18:13.096 [2024-12-09T03:08:41.672Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:13.096 [2024-12-09T03:08:41.672Z] =================================================================================================================== 00:18:13.096 [2024-12-09T03:08:41.672Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:13.096 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251288 00:18:13.355 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:13.355 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:13.355 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.355 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.355 04:08:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.ApVRvtgogO 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ApVRvtgogO 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251428 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251428 /var/tmp/bdevperf.sock 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251428 ']' 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:13.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.356 04:08:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.356 [2024-12-09 04:08:41.778806] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:13.356 [2024-12-09 04:08:41.778892] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251428 ] 00:18:13.356 [2024-12-09 04:08:41.850925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.356 [2024-12-09 04:08:41.908513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.614 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.614 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.614 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ApVRvtgogO 00:18:13.872 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.129 [2024-12-09 04:08:42.542699] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:14.129 [2024-12-09 04:08:42.548309] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:14.130 [2024-12-09 04:08:42.548360] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:14.130 [2024-12-09 04:08:42.548412] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:14.130 [2024-12-09 04:08:42.548906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9f30 (107): Transport endpoint is not connected 00:18:14.130 [2024-12-09 04:08:42.549896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a9f30 (9): Bad file descriptor 00:18:14.130 [2024-12-09 04:08:42.550895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:14.130 [2024-12-09 04:08:42.550915] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:14.130 [2024-12-09 04:08:42.550938] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:14.130 [2024-12-09 04:08:42.550956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:14.130 request: 00:18:14.130 { 00:18:14.130 "name": "TLSTEST", 00:18:14.130 "trtype": "tcp", 00:18:14.130 "traddr": "10.0.0.2", 00:18:14.130 "adrfam": "ipv4", 00:18:14.130 "trsvcid": "4420", 00:18:14.130 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:14.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:14.130 "prchk_reftag": false, 00:18:14.130 "prchk_guard": false, 00:18:14.130 "hdgst": false, 00:18:14.130 "ddgst": false, 00:18:14.130 "psk": "key0", 00:18:14.130 "allow_unrecognized_csi": false, 00:18:14.130 "method": "bdev_nvme_attach_controller", 00:18:14.130 "req_id": 1 00:18:14.130 } 00:18:14.130 Got JSON-RPC error response 00:18:14.130 response: 00:18:14.130 { 00:18:14.130 "code": -5, 00:18:14.130 "message": "Input/output error" 00:18:14.130 } 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251428 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251428 ']' 00:18:14.130 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251428 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251428 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251428' 00:18:14.130 killing process with pid 251428 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251428 00:18:14.130 Received shutdown signal, test time was about 10.000000 seconds 00:18:14.130 00:18:14.130 Latency(us) 00:18:14.130 [2024-12-09T03:08:42.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:14.130 [2024-12-09T03:08:42.706Z] =================================================================================================================== 00:18:14.130 [2024-12-09T03:08:42.706Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:14.130 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251428 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.388 04:08:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=251569 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 251569 /var/tmp/bdevperf.sock 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251569 ']' 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:14.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:14.388 04:08:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.388 [2024-12-09 04:08:42.848710] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:14.388 [2024-12-09 04:08:42.848798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid251569 ] 00:18:14.388 [2024-12-09 04:08:42.916343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.646 [2024-12-09 04:08:42.976311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.646 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.646 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:14.646 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:14.903 [2024-12-09 04:08:43.332977] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:14.903 [2024-12-09 04:08:43.333025] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:14.903 request: 00:18:14.903 { 00:18:14.903 "name": "key0", 00:18:14.903 "path": "", 00:18:14.903 "method": "keyring_file_add_key", 00:18:14.903 "req_id": 1 00:18:14.903 } 00:18:14.903 Got JSON-RPC error response 00:18:14.903 response: 00:18:14.903 { 00:18:14.903 "code": -1, 00:18:14.903 "message": "Operation not permitted" 00:18:14.903 } 00:18:14.903 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:15.162 [2024-12-09 04:08:43.597804] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:15.162 [2024-12-09 04:08:43.597868] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:15.162 request: 00:18:15.162 { 00:18:15.162 "name": "TLSTEST", 00:18:15.162 "trtype": "tcp", 00:18:15.162 "traddr": "10.0.0.2", 00:18:15.162 "adrfam": "ipv4", 00:18:15.162 "trsvcid": "4420", 00:18:15.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:15.162 "prchk_reftag": false, 00:18:15.162 "prchk_guard": false, 00:18:15.162 "hdgst": false, 00:18:15.162 "ddgst": false, 00:18:15.162 "psk": "key0", 00:18:15.162 "allow_unrecognized_csi": false, 00:18:15.162 "method": "bdev_nvme_attach_controller", 00:18:15.162 "req_id": 1 00:18:15.162 } 00:18:15.162 Got JSON-RPC error response 00:18:15.162 response: 00:18:15.162 { 00:18:15.162 "code": -126, 00:18:15.162 "message": "Required key not available" 00:18:15.162 } 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 251569 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251569 ']' 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251569 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251569 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251569' 00:18:15.162 killing process with pid 251569 00:18:15.162 
04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251569 00:18:15.162 Received shutdown signal, test time was about 10.000000 seconds 00:18:15.162 00:18:15.162 Latency(us) 00:18:15.162 [2024-12-09T03:08:43.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:15.162 [2024-12-09T03:08:43.738Z] =================================================================================================================== 00:18:15.162 [2024-12-09T03:08:43.738Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:15.162 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251569 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 247923 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 247923 ']' 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 247923 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 247923 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 247923' 00:18:15.420 killing process with pid 247923 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 247923 00:18:15.420 04:08:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 247923 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.fbnd8laNn2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:15.678 04:08:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.fbnd8laNn2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=251842 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 251842 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 251842 ']' 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.678 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.678 [2024-12-09 04:08:44.252139] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
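(Editor's note on the `format_interchange_psk` step logged above: the resulting key string `NVMeTLSkey-1:02:MDAx...wWXNJw==:` base64-decodes to the configured key characters followed by four trailing bytes. The sketch below is a hypothetical reimplementation inferred from that decoding — assuming the trailer is a little-endian CRC32 of the key bytes, which matches the NVMe TLS PSK interchange layout — not the actual helper from `nvmf/common.sh`.)

```python
# Hypothetical sketch of the PSK interchange encoding seen in the log:
# payload = base64(key_bytes + CRC32(key_bytes) as 4 little-endian bytes),
# wrapped as NVMeTLSkey-1:<digest>:<base64>:  (digest 02 per the log).
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    """Wrap a configured PSK string in the NVMe TLS interchange format."""
    raw = key.encode()
    # CRC32 of the key bytes, appended as a 4-byte little-endian trailer
    crc = zlib.crc32(raw).to_bytes(4, "little")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(raw + crc).decode())

# Same inputs as the test above: the 48-character key and digest 2
key = "00112233445566778899aabbccddeeff0011223344556677"
psk = format_interchange_psk(key, 2)
print(psk)
```

Decoding the produced string recovers the original key plus its CRC trailer, which is how the structure above was inferred from the logged output.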
00:18:15.678 [2024-12-09 04:08:44.252244] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.935 [2024-12-09 04:08:44.323514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.935 [2024-12-09 04:08:44.378716] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.935 [2024-12-09 04:08:44.378773] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.935 [2024-12-09 04:08:44.378810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.935 [2024-12-09 04:08:44.378823] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.935 [2024-12-09 04:08:44.378832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:15.935 [2024-12-09 04:08:44.379406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.935 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.935 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.936 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:15.936 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.936 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.193 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.194 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2 00:18:16.194 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2 00:18:16.194 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:16.451 [2024-12-09 04:08:44.772043] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.451 04:08:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:16.711 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.967 [2024-12-09 04:08:45.317517] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.967 [2024-12-09 04:08:45.317821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:16.968 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:17.226 malloc0 00:18:17.226 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:17.483 04:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:17.741 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fbnd8laNn2 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=252126 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.999 04:08:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 252126 /var/tmp/bdevperf.sock 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 252126 ']' 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.999 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.999 [2024-12-09 04:08:46.522050] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:17.999 [2024-12-09 04:08:46.522127] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid252126 ] 00:18:18.257 [2024-12-09 04:08:46.588928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.258 [2024-12-09 04:08:46.645453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.258 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.258 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:18.258 04:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:18.514 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.772 [2024-12-09 04:08:47.326823] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.030 TLSTESTn1 00:18:19.030 04:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:19.030 Running I/O for 10 seconds... 
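(Editor's note: bdevperf reports throughput as both IOPS and MiB/s. With this run's 4 KiB I/O size (`-o 4096`), the two figures are related by a simple conversion; the sketch uses the total IOPS this run reports in its final results.)

```python
# How bdevperf's MiB/s column follows from its IOPS column for -o 4096.
io_size = 4096                        # bytes per I/O (-o 4096)
iops = 3401.53                        # total IOPS from this run's results
mibps = iops * io_size / (1 << 20)    # 1 MiB = 2**20 bytes
print(f"{mibps:.2f} MiB/s")           # ~13.29 MiB/s, matching the results table
```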
00:18:21.335 3374.00 IOPS, 13.18 MiB/s [2024-12-09T03:08:50.844Z] 3400.50 IOPS, 13.28 MiB/s [2024-12-09T03:08:51.778Z] 3417.67 IOPS, 13.35 MiB/s [2024-12-09T03:08:52.711Z] 3451.75 IOPS, 13.48 MiB/s [2024-12-09T03:08:53.643Z] 3440.60 IOPS, 13.44 MiB/s [2024-12-09T03:08:54.577Z] 3431.17 IOPS, 13.40 MiB/s [2024-12-09T03:08:55.948Z] 3432.29 IOPS, 13.41 MiB/s [2024-12-09T03:08:56.878Z] 3383.38 IOPS, 13.22 MiB/s [2024-12-09T03:08:57.812Z] 3394.33 IOPS, 13.26 MiB/s [2024-12-09T03:08:57.812Z] 3400.00 IOPS, 13.28 MiB/s 00:18:29.236 Latency(us) 00:18:29.236 [2024-12-09T03:08:57.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.236 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:29.236 Verification LBA range: start 0x0 length 0x2000 00:18:29.236 TLSTESTn1 : 10.03 3401.53 13.29 0.00 0.00 37552.26 6310.87 37282.70 00:18:29.236 [2024-12-09T03:08:57.812Z] =================================================================================================================== 00:18:29.236 [2024-12-09T03:08:57.812Z] Total : 3401.53 13.29 0.00 0.00 37552.26 6310.87 37282.70 00:18:29.236 { 00:18:29.236 "results": [ 00:18:29.236 { 00:18:29.236 "job": "TLSTESTn1", 00:18:29.236 "core_mask": "0x4", 00:18:29.236 "workload": "verify", 00:18:29.236 "status": "finished", 00:18:29.236 "verify_range": { 00:18:29.236 "start": 0, 00:18:29.236 "length": 8192 00:18:29.236 }, 00:18:29.236 "queue_depth": 128, 00:18:29.236 "io_size": 4096, 00:18:29.236 "runtime": 10.032532, 00:18:29.236 "iops": 3401.534129170981, 00:18:29.236 "mibps": 13.287242692074145, 00:18:29.236 "io_failed": 0, 00:18:29.236 "io_timeout": 0, 00:18:29.236 "avg_latency_us": 37552.259868331086, 00:18:29.236 "min_latency_us": 6310.874074074074, 00:18:29.236 "max_latency_us": 37282.70222222222 00:18:29.236 } 00:18:29.236 ], 00:18:29.236 "core_count": 1 00:18:29.236 } 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 252126 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 252126 ']' 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 252126 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 252126 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 252126' 00:18:29.236 killing process with pid 252126 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 252126 00:18:29.236 Received shutdown signal, test time was about 10.000000 seconds 00:18:29.236 00:18:29.236 Latency(us) 00:18:29.236 [2024-12-09T03:08:57.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.236 [2024-12-09T03:08:57.812Z] =================================================================================================================== 00:18:29.236 [2024-12-09T03:08:57.812Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.236 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 252126 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.fbnd8laNn2 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 
-- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fbnd8laNn2 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.fbnd8laNn2 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=253428 00:18:29.494 04:08:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 253428 /var/tmp/bdevperf.sock 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 253428 ']' 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:29.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:29.494 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:29.495 04:08:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:29.495 [2024-12-09 04:08:57.899884] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:29.495 [2024-12-09 04:08:57.899974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid253428 ] 00:18:29.495 [2024-12-09 04:08:57.974359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.495 [2024-12-09 04:08:58.035106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:29.752 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:29.752 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:29.752 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:30.010 [2024-12-09 04:08:58.415134] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fbnd8laNn2': 0100666 00:18:30.010 [2024-12-09 04:08:58.415176] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:30.010 request: 00:18:30.010 { 00:18:30.010 "name": "key0", 00:18:30.010 "path": "/tmp/tmp.fbnd8laNn2", 00:18:30.010 "method": "keyring_file_add_key", 00:18:30.010 "req_id": 1 00:18:30.010 } 00:18:30.010 Got JSON-RPC error response 00:18:30.010 response: 00:18:30.010 { 00:18:30.010 "code": -1, 00:18:30.010 "message": "Operation not permitted" 00:18:30.010 } 00:18:30.010 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:30.267 [2024-12-09 04:08:58.679947] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:30.267 [2024-12-09 04:08:58.680013] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:30.267 request: 00:18:30.267 { 00:18:30.267 "name": "TLSTEST", 00:18:30.267 "trtype": "tcp", 00:18:30.267 "traddr": "10.0.0.2", 00:18:30.267 "adrfam": "ipv4", 00:18:30.267 "trsvcid": "4420", 00:18:30.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:30.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:30.267 "prchk_reftag": false, 00:18:30.267 "prchk_guard": false, 00:18:30.267 "hdgst": false, 00:18:30.267 "ddgst": false, 00:18:30.267 "psk": "key0", 00:18:30.267 "allow_unrecognized_csi": false, 00:18:30.267 "method": "bdev_nvme_attach_controller", 00:18:30.267 "req_id": 1 00:18:30.267 } 00:18:30.267 Got JSON-RPC error response 00:18:30.267 response: 00:18:30.267 { 00:18:30.267 "code": -126, 00:18:30.267 "message": "Required key not available" 00:18:30.267 } 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 253428 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 253428 ']' 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 253428 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253428 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 253428' 00:18:30.267 killing process with pid 253428 00:18:30.267 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 253428 00:18:30.267 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.267 00:18:30.267 Latency(us) 00:18:30.267 [2024-12-09T03:08:58.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.268 [2024-12-09T03:08:58.844Z] =================================================================================================================== 00:18:30.268 [2024-12-09T03:08:58.844Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:30.268 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 253428 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 251842 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 251842 ']' 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 251842 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 251842 00:18:30.525 04:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 251842' 00:18:30.525 killing process with pid 251842 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 251842 00:18:30.525 04:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 251842 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=253599 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 253599 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 253599 ']' 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:30.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.782 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.783 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.783 [2024-12-09 04:08:59.229859] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:30.783 [2024-12-09 04:08:59.229942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:30.783 [2024-12-09 04:08:59.299375] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.041 [2024-12-09 04:08:59.359827] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.041 [2024-12-09 04:08:59.359879] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.041 [2024-12-09 04:08:59.359894] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.041 [2024-12-09 04:08:59.359915] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.041 [2024-12-09 04:08:59.359926] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.041 [2024-12-09 04:08:59.360596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.fbnd8laNn2 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fbnd8laNn2 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2 00:18:31.041 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.299 [2024-12-09 04:08:59.809841] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.299 04:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.864 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.864 [2024-12-09 04:09:00.419585] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:31.864 [2024-12-09 04:09:00.419861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.864 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.429 malloc0 00:18:32.429 04:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.686 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:32.944 [2024-12-09 04:09:01.284903] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.fbnd8laNn2': 0100666 00:18:32.944 [2024-12-09 04:09:01.284945] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:32.944 request: 00:18:32.944 { 00:18:32.944 "name": "key0", 00:18:32.944 "path": "/tmp/tmp.fbnd8laNn2", 00:18:32.944 "method": "keyring_file_add_key", 00:18:32.944 "req_id": 1 
00:18:32.944 } 00:18:32.944 Got JSON-RPC error response 00:18:32.944 response: 00:18:32.944 { 00:18:32.944 "code": -1, 00:18:32.944 "message": "Operation not permitted" 00:18:32.944 } 00:18:32.944 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:33.202 [2024-12-09 04:09:01.561671] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:33.202 [2024-12-09 04:09:01.561730] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:33.202 request: 00:18:33.202 { 00:18:33.202 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.202 "host": "nqn.2016-06.io.spdk:host1", 00:18:33.202 "psk": "key0", 00:18:33.202 "method": "nvmf_subsystem_add_host", 00:18:33.202 "req_id": 1 00:18:33.202 } 00:18:33.202 Got JSON-RPC error response 00:18:33.202 response: 00:18:33.202 { 00:18:33.202 "code": -32603, 00:18:33.202 "message": "Internal error" 00:18:33.202 } 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 253599 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 253599 ']' 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 253599 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:33.202 04:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 253599 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 253599' 00:18:33.202 killing process with pid 253599 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 253599 00:18:33.202 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 253599 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.fbnd8laNn2 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=254011 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 254011 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254011 ']' 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:33.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:33.460 04:09:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.460 [2024-12-09 04:09:01.906361] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:33.460 [2024-12-09 04:09:01.906458] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:33.460 [2024-12-09 04:09:01.978861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.460 [2024-12-09 04:09:02.032407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.460 [2024-12-09 04:09:02.032480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.460 [2024-12-09 04:09:02.032505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.460 [2024-12-09 04:09:02.032517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.460 [2024-12-09 04:09:02.032527] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:33.460 [2024-12-09 04:09:02.033176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2 00:18:33.718 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:33.975 [2024-12-09 04:09:02.423922] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:33.975 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:34.233 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:34.490 [2024-12-09 04:09:02.961346] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:34.490 [2024-12-09 04:09:02.961591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:34.490 04:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:34.747 malloc0 00:18:34.748 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:35.004 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:35.262 04:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=254298 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 254298 /var/tmp/bdevperf.sock 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254298 ']' 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:18:35.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.520 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.778 [2024-12-09 04:09:04.131236] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:35.778 [2024-12-09 04:09:04.131364] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254298 ] 00:18:35.778 [2024-12-09 04:09:04.200093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.778 [2024-12-09 04:09:04.259095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:36.035 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:36.036 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:36.036 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:36.294 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:36.551 [2024-12-09 04:09:04.911706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:36.551 TLSTESTn1 00:18:36.551 04:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:36.808 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:18:36.808 "subsystems": [ 00:18:36.808 { 00:18:36.808 "subsystem": "keyring", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": "keyring_file_add_key", 00:18:36.808 "params": { 00:18:36.808 "name": "key0", 00:18:36.808 "path": "/tmp/tmp.fbnd8laNn2" 00:18:36.808 } 00:18:36.808 } 00:18:36.808 ] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "iobuf", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": "iobuf_set_options", 00:18:36.808 "params": { 00:18:36.808 "small_pool_count": 8192, 00:18:36.808 "large_pool_count": 1024, 00:18:36.808 "small_bufsize": 8192, 00:18:36.808 "large_bufsize": 135168, 00:18:36.808 "enable_numa": false 00:18:36.808 } 00:18:36.808 } 00:18:36.808 ] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "sock", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": "sock_set_default_impl", 00:18:36.808 "params": { 00:18:36.808 "impl_name": "posix" 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "sock_impl_set_options", 00:18:36.808 "params": { 00:18:36.808 "impl_name": "ssl", 00:18:36.808 "recv_buf_size": 4096, 00:18:36.808 "send_buf_size": 4096, 00:18:36.808 "enable_recv_pipe": true, 00:18:36.808 "enable_quickack": false, 00:18:36.808 "enable_placement_id": 0, 00:18:36.808 "enable_zerocopy_send_server": true, 00:18:36.808 "enable_zerocopy_send_client": false, 00:18:36.808 "zerocopy_threshold": 0, 00:18:36.808 "tls_version": 0, 00:18:36.808 "enable_ktls": false 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "sock_impl_set_options", 00:18:36.808 "params": { 00:18:36.808 "impl_name": "posix", 00:18:36.808 "recv_buf_size": 2097152, 00:18:36.808 "send_buf_size": 2097152, 00:18:36.808 "enable_recv_pipe": true, 00:18:36.808 "enable_quickack": false, 00:18:36.808 "enable_placement_id": 0, 
00:18:36.808 "enable_zerocopy_send_server": true, 00:18:36.808 "enable_zerocopy_send_client": false, 00:18:36.808 "zerocopy_threshold": 0, 00:18:36.808 "tls_version": 0, 00:18:36.808 "enable_ktls": false 00:18:36.808 } 00:18:36.808 } 00:18:36.808 ] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "vmd", 00:18:36.808 "config": [] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "accel", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": "accel_set_options", 00:18:36.808 "params": { 00:18:36.808 "small_cache_size": 128, 00:18:36.808 "large_cache_size": 16, 00:18:36.808 "task_count": 2048, 00:18:36.808 "sequence_count": 2048, 00:18:36.808 "buf_count": 2048 00:18:36.808 } 00:18:36.808 } 00:18:36.808 ] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "bdev", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": "bdev_set_options", 00:18:36.808 "params": { 00:18:36.808 "bdev_io_pool_size": 65535, 00:18:36.808 "bdev_io_cache_size": 256, 00:18:36.808 "bdev_auto_examine": true, 00:18:36.808 "iobuf_small_cache_size": 128, 00:18:36.808 "iobuf_large_cache_size": 16 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "bdev_raid_set_options", 00:18:36.808 "params": { 00:18:36.808 "process_window_size_kb": 1024, 00:18:36.808 "process_max_bandwidth_mb_sec": 0 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "bdev_iscsi_set_options", 00:18:36.808 "params": { 00:18:36.808 "timeout_sec": 30 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "bdev_nvme_set_options", 00:18:36.808 "params": { 00:18:36.808 "action_on_timeout": "none", 00:18:36.808 "timeout_us": 0, 00:18:36.808 "timeout_admin_us": 0, 00:18:36.808 "keep_alive_timeout_ms": 10000, 00:18:36.808 "arbitration_burst": 0, 00:18:36.808 "low_priority_weight": 0, 00:18:36.808 "medium_priority_weight": 0, 00:18:36.808 "high_priority_weight": 0, 00:18:36.808 "nvme_adminq_poll_period_us": 10000, 00:18:36.808 "nvme_ioq_poll_period_us": 0, 
00:18:36.808 "io_queue_requests": 0, 00:18:36.808 "delay_cmd_submit": true, 00:18:36.808 "transport_retry_count": 4, 00:18:36.808 "bdev_retry_count": 3, 00:18:36.808 "transport_ack_timeout": 0, 00:18:36.808 "ctrlr_loss_timeout_sec": 0, 00:18:36.808 "reconnect_delay_sec": 0, 00:18:36.808 "fast_io_fail_timeout_sec": 0, 00:18:36.808 "disable_auto_failback": false, 00:18:36.808 "generate_uuids": false, 00:18:36.808 "transport_tos": 0, 00:18:36.808 "nvme_error_stat": false, 00:18:36.808 "rdma_srq_size": 0, 00:18:36.808 "io_path_stat": false, 00:18:36.808 "allow_accel_sequence": false, 00:18:36.808 "rdma_max_cq_size": 0, 00:18:36.808 "rdma_cm_event_timeout_ms": 0, 00:18:36.808 "dhchap_digests": [ 00:18:36.808 "sha256", 00:18:36.808 "sha384", 00:18:36.808 "sha512" 00:18:36.808 ], 00:18:36.808 "dhchap_dhgroups": [ 00:18:36.808 "null", 00:18:36.808 "ffdhe2048", 00:18:36.808 "ffdhe3072", 00:18:36.808 "ffdhe4096", 00:18:36.808 "ffdhe6144", 00:18:36.808 "ffdhe8192" 00:18:36.808 ] 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "bdev_nvme_set_hotplug", 00:18:36.808 "params": { 00:18:36.808 "period_us": 100000, 00:18:36.808 "enable": false 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "bdev_malloc_create", 00:18:36.808 "params": { 00:18:36.808 "name": "malloc0", 00:18:36.808 "num_blocks": 8192, 00:18:36.808 "block_size": 4096, 00:18:36.808 "physical_block_size": 4096, 00:18:36.808 "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b", 00:18:36.808 "optimal_io_boundary": 0, 00:18:36.808 "md_size": 0, 00:18:36.808 "dif_type": 0, 00:18:36.808 "dif_is_head_of_md": false, 00:18:36.808 "dif_pi_format": 0 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "bdev_wait_for_examine" 00:18:36.808 } 00:18:36.808 ] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "nbd", 00:18:36.808 "config": [] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "scheduler", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": 
"framework_set_scheduler", 00:18:36.808 "params": { 00:18:36.808 "name": "static" 00:18:36.808 } 00:18:36.808 } 00:18:36.808 ] 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "subsystem": "nvmf", 00:18:36.808 "config": [ 00:18:36.808 { 00:18:36.808 "method": "nvmf_set_config", 00:18:36.808 "params": { 00:18:36.808 "discovery_filter": "match_any", 00:18:36.808 "admin_cmd_passthru": { 00:18:36.808 "identify_ctrlr": false 00:18:36.808 }, 00:18:36.808 "dhchap_digests": [ 00:18:36.808 "sha256", 00:18:36.808 "sha384", 00:18:36.808 "sha512" 00:18:36.808 ], 00:18:36.808 "dhchap_dhgroups": [ 00:18:36.808 "null", 00:18:36.808 "ffdhe2048", 00:18:36.808 "ffdhe3072", 00:18:36.808 "ffdhe4096", 00:18:36.808 "ffdhe6144", 00:18:36.808 "ffdhe8192" 00:18:36.808 ] 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "nvmf_set_max_subsystems", 00:18:36.808 "params": { 00:18:36.808 "max_subsystems": 1024 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "nvmf_set_crdt", 00:18:36.808 "params": { 00:18:36.808 "crdt1": 0, 00:18:36.808 "crdt2": 0, 00:18:36.808 "crdt3": 0 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "nvmf_create_transport", 00:18:36.808 "params": { 00:18:36.808 "trtype": "TCP", 00:18:36.808 "max_queue_depth": 128, 00:18:36.808 "max_io_qpairs_per_ctrlr": 127, 00:18:36.808 "in_capsule_data_size": 4096, 00:18:36.808 "max_io_size": 131072, 00:18:36.808 "io_unit_size": 131072, 00:18:36.808 "max_aq_depth": 128, 00:18:36.808 "num_shared_buffers": 511, 00:18:36.808 "buf_cache_size": 4294967295, 00:18:36.808 "dif_insert_or_strip": false, 00:18:36.808 "zcopy": false, 00:18:36.808 "c2h_success": false, 00:18:36.808 "sock_priority": 0, 00:18:36.808 "abort_timeout_sec": 1, 00:18:36.808 "ack_timeout": 0, 00:18:36.808 "data_wr_pool_size": 0 00:18:36.808 } 00:18:36.808 }, 00:18:36.808 { 00:18:36.808 "method": "nvmf_create_subsystem", 00:18:36.808 "params": { 00:18:36.808 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.808 
"allow_any_host": false, 00:18:36.808 "serial_number": "SPDK00000000000001", 00:18:36.809 "model_number": "SPDK bdev Controller", 00:18:36.809 "max_namespaces": 10, 00:18:36.809 "min_cntlid": 1, 00:18:36.809 "max_cntlid": 65519, 00:18:36.809 "ana_reporting": false 00:18:36.809 } 00:18:36.809 }, 00:18:36.809 { 00:18:36.809 "method": "nvmf_subsystem_add_host", 00:18:36.809 "params": { 00:18:36.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.809 "host": "nqn.2016-06.io.spdk:host1", 00:18:36.809 "psk": "key0" 00:18:36.809 } 00:18:36.809 }, 00:18:36.809 { 00:18:36.809 "method": "nvmf_subsystem_add_ns", 00:18:36.809 "params": { 00:18:36.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.809 "namespace": { 00:18:36.809 "nsid": 1, 00:18:36.809 "bdev_name": "malloc0", 00:18:36.809 "nguid": "8A7E72C03EBD4675A56EAC193C25A21B", 00:18:36.809 "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b", 00:18:36.809 "no_auto_visible": false 00:18:36.809 } 00:18:36.809 } 00:18:36.809 }, 00:18:36.809 { 00:18:36.809 "method": "nvmf_subsystem_add_listener", 00:18:36.809 "params": { 00:18:36.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:36.809 "listen_address": { 00:18:36.809 "trtype": "TCP", 00:18:36.809 "adrfam": "IPv4", 00:18:36.809 "traddr": "10.0.0.2", 00:18:36.809 "trsvcid": "4420" 00:18:36.809 }, 00:18:36.809 "secure_channel": true 00:18:36.809 } 00:18:36.809 } 00:18:36.809 ] 00:18:36.809 } 00:18:36.809 ] 00:18:36.809 }' 00:18:36.809 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:37.373 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:18:37.373 "subsystems": [ 00:18:37.373 { 00:18:37.373 "subsystem": "keyring", 00:18:37.373 "config": [ 00:18:37.373 { 00:18:37.373 "method": "keyring_file_add_key", 00:18:37.373 "params": { 00:18:37.373 "name": "key0", 00:18:37.373 "path": "/tmp/tmp.fbnd8laNn2" 00:18:37.373 } 
00:18:37.373 } 00:18:37.373 ] 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "subsystem": "iobuf", 00:18:37.373 "config": [ 00:18:37.373 { 00:18:37.373 "method": "iobuf_set_options", 00:18:37.373 "params": { 00:18:37.373 "small_pool_count": 8192, 00:18:37.373 "large_pool_count": 1024, 00:18:37.373 "small_bufsize": 8192, 00:18:37.373 "large_bufsize": 135168, 00:18:37.373 "enable_numa": false 00:18:37.373 } 00:18:37.373 } 00:18:37.373 ] 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "subsystem": "sock", 00:18:37.373 "config": [ 00:18:37.373 { 00:18:37.373 "method": "sock_set_default_impl", 00:18:37.373 "params": { 00:18:37.373 "impl_name": "posix" 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "sock_impl_set_options", 00:18:37.373 "params": { 00:18:37.373 "impl_name": "ssl", 00:18:37.373 "recv_buf_size": 4096, 00:18:37.373 "send_buf_size": 4096, 00:18:37.373 "enable_recv_pipe": true, 00:18:37.373 "enable_quickack": false, 00:18:37.373 "enable_placement_id": 0, 00:18:37.373 "enable_zerocopy_send_server": true, 00:18:37.373 "enable_zerocopy_send_client": false, 00:18:37.373 "zerocopy_threshold": 0, 00:18:37.373 "tls_version": 0, 00:18:37.373 "enable_ktls": false 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "sock_impl_set_options", 00:18:37.373 "params": { 00:18:37.373 "impl_name": "posix", 00:18:37.373 "recv_buf_size": 2097152, 00:18:37.373 "send_buf_size": 2097152, 00:18:37.373 "enable_recv_pipe": true, 00:18:37.373 "enable_quickack": false, 00:18:37.373 "enable_placement_id": 0, 00:18:37.373 "enable_zerocopy_send_server": true, 00:18:37.373 "enable_zerocopy_send_client": false, 00:18:37.373 "zerocopy_threshold": 0, 00:18:37.373 "tls_version": 0, 00:18:37.373 "enable_ktls": false 00:18:37.373 } 00:18:37.373 } 00:18:37.373 ] 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "subsystem": "vmd", 00:18:37.373 "config": [] 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "subsystem": "accel", 00:18:37.373 "config": [ 00:18:37.373 { 00:18:37.373 
"method": "accel_set_options", 00:18:37.373 "params": { 00:18:37.373 "small_cache_size": 128, 00:18:37.373 "large_cache_size": 16, 00:18:37.373 "task_count": 2048, 00:18:37.373 "sequence_count": 2048, 00:18:37.373 "buf_count": 2048 00:18:37.373 } 00:18:37.373 } 00:18:37.373 ] 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "subsystem": "bdev", 00:18:37.373 "config": [ 00:18:37.373 { 00:18:37.373 "method": "bdev_set_options", 00:18:37.373 "params": { 00:18:37.373 "bdev_io_pool_size": 65535, 00:18:37.373 "bdev_io_cache_size": 256, 00:18:37.373 "bdev_auto_examine": true, 00:18:37.373 "iobuf_small_cache_size": 128, 00:18:37.373 "iobuf_large_cache_size": 16 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "bdev_raid_set_options", 00:18:37.373 "params": { 00:18:37.373 "process_window_size_kb": 1024, 00:18:37.373 "process_max_bandwidth_mb_sec": 0 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "bdev_iscsi_set_options", 00:18:37.373 "params": { 00:18:37.373 "timeout_sec": 30 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "bdev_nvme_set_options", 00:18:37.373 "params": { 00:18:37.373 "action_on_timeout": "none", 00:18:37.373 "timeout_us": 0, 00:18:37.373 "timeout_admin_us": 0, 00:18:37.373 "keep_alive_timeout_ms": 10000, 00:18:37.373 "arbitration_burst": 0, 00:18:37.373 "low_priority_weight": 0, 00:18:37.373 "medium_priority_weight": 0, 00:18:37.373 "high_priority_weight": 0, 00:18:37.373 "nvme_adminq_poll_period_us": 10000, 00:18:37.373 "nvme_ioq_poll_period_us": 0, 00:18:37.373 "io_queue_requests": 512, 00:18:37.373 "delay_cmd_submit": true, 00:18:37.373 "transport_retry_count": 4, 00:18:37.373 "bdev_retry_count": 3, 00:18:37.373 "transport_ack_timeout": 0, 00:18:37.373 "ctrlr_loss_timeout_sec": 0, 00:18:37.373 "reconnect_delay_sec": 0, 00:18:37.373 "fast_io_fail_timeout_sec": 0, 00:18:37.373 "disable_auto_failback": false, 00:18:37.373 "generate_uuids": false, 00:18:37.373 "transport_tos": 0, 00:18:37.373 
"nvme_error_stat": false, 00:18:37.373 "rdma_srq_size": 0, 00:18:37.373 "io_path_stat": false, 00:18:37.373 "allow_accel_sequence": false, 00:18:37.373 "rdma_max_cq_size": 0, 00:18:37.373 "rdma_cm_event_timeout_ms": 0, 00:18:37.373 "dhchap_digests": [ 00:18:37.373 "sha256", 00:18:37.373 "sha384", 00:18:37.373 "sha512" 00:18:37.373 ], 00:18:37.373 "dhchap_dhgroups": [ 00:18:37.373 "null", 00:18:37.373 "ffdhe2048", 00:18:37.373 "ffdhe3072", 00:18:37.373 "ffdhe4096", 00:18:37.373 "ffdhe6144", 00:18:37.373 "ffdhe8192" 00:18:37.373 ] 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "bdev_nvme_attach_controller", 00:18:37.373 "params": { 00:18:37.373 "name": "TLSTEST", 00:18:37.373 "trtype": "TCP", 00:18:37.373 "adrfam": "IPv4", 00:18:37.373 "traddr": "10.0.0.2", 00:18:37.373 "trsvcid": "4420", 00:18:37.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.373 "prchk_reftag": false, 00:18:37.373 "prchk_guard": false, 00:18:37.373 "ctrlr_loss_timeout_sec": 0, 00:18:37.373 "reconnect_delay_sec": 0, 00:18:37.373 "fast_io_fail_timeout_sec": 0, 00:18:37.373 "psk": "key0", 00:18:37.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.373 "hdgst": false, 00:18:37.373 "ddgst": false, 00:18:37.373 "multipath": "multipath" 00:18:37.373 } 00:18:37.373 }, 00:18:37.373 { 00:18:37.373 "method": "bdev_nvme_set_hotplug", 00:18:37.374 "params": { 00:18:37.374 "period_us": 100000, 00:18:37.374 "enable": false 00:18:37.374 } 00:18:37.374 }, 00:18:37.374 { 00:18:37.374 "method": "bdev_wait_for_examine" 00:18:37.374 } 00:18:37.374 ] 00:18:37.374 }, 00:18:37.374 { 00:18:37.374 "subsystem": "nbd", 00:18:37.374 "config": [] 00:18:37.374 } 00:18:37.374 ] 00:18:37.374 }' 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 254298 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254298 ']' 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 254298 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254298 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254298' 00:18:37.374 killing process with pid 254298 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254298 00:18:37.374 Received shutdown signal, test time was about 10.000000 seconds 00:18:37.374 00:18:37.374 Latency(us) 00:18:37.374 [2024-12-09T03:09:05.950Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.374 [2024-12-09T03:09:05.950Z] =================================================================================================================== 00:18:37.374 [2024-12-09T03:09:05.950Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254298 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 254011 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254011 ']' 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254011 00:18:37.374 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254011 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254011' 00:18:37.631 killing process with pid 254011 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254011 00:18:37.631 04:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254011 00:18:37.889 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:37.889 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:37.889 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.889 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:18:37.889 "subsystems": [ 00:18:37.889 { 00:18:37.889 "subsystem": "keyring", 00:18:37.889 "config": [ 00:18:37.890 { 00:18:37.890 "method": "keyring_file_add_key", 00:18:37.890 "params": { 00:18:37.890 "name": "key0", 00:18:37.890 "path": "/tmp/tmp.fbnd8laNn2" 00:18:37.890 } 00:18:37.890 } 00:18:37.890 ] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "iobuf", 00:18:37.890 "config": [ 00:18:37.890 { 00:18:37.890 "method": "iobuf_set_options", 00:18:37.890 "params": { 00:18:37.890 "small_pool_count": 8192, 00:18:37.890 "large_pool_count": 1024, 00:18:37.890 "small_bufsize": 8192, 00:18:37.890 "large_bufsize": 135168, 00:18:37.890 "enable_numa": false 00:18:37.890 } 00:18:37.890 } 00:18:37.890 ] 00:18:37.890 }, 00:18:37.890 
{ 00:18:37.890 "subsystem": "sock", 00:18:37.890 "config": [ 00:18:37.890 { 00:18:37.890 "method": "sock_set_default_impl", 00:18:37.890 "params": { 00:18:37.890 "impl_name": "posix" 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "sock_impl_set_options", 00:18:37.890 "params": { 00:18:37.890 "impl_name": "ssl", 00:18:37.890 "recv_buf_size": 4096, 00:18:37.890 "send_buf_size": 4096, 00:18:37.890 "enable_recv_pipe": true, 00:18:37.890 "enable_quickack": false, 00:18:37.890 "enable_placement_id": 0, 00:18:37.890 "enable_zerocopy_send_server": true, 00:18:37.890 "enable_zerocopy_send_client": false, 00:18:37.890 "zerocopy_threshold": 0, 00:18:37.890 "tls_version": 0, 00:18:37.890 "enable_ktls": false 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "sock_impl_set_options", 00:18:37.890 "params": { 00:18:37.890 "impl_name": "posix", 00:18:37.890 "recv_buf_size": 2097152, 00:18:37.890 "send_buf_size": 2097152, 00:18:37.890 "enable_recv_pipe": true, 00:18:37.890 "enable_quickack": false, 00:18:37.890 "enable_placement_id": 0, 00:18:37.890 "enable_zerocopy_send_server": true, 00:18:37.890 "enable_zerocopy_send_client": false, 00:18:37.890 "zerocopy_threshold": 0, 00:18:37.890 "tls_version": 0, 00:18:37.890 "enable_ktls": false 00:18:37.890 } 00:18:37.890 } 00:18:37.890 ] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "vmd", 00:18:37.890 "config": [] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "accel", 00:18:37.890 "config": [ 00:18:37.890 { 00:18:37.890 "method": "accel_set_options", 00:18:37.890 "params": { 00:18:37.890 "small_cache_size": 128, 00:18:37.890 "large_cache_size": 16, 00:18:37.890 "task_count": 2048, 00:18:37.890 "sequence_count": 2048, 00:18:37.890 "buf_count": 2048 00:18:37.890 } 00:18:37.890 } 00:18:37.890 ] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "bdev", 00:18:37.890 "config": [ 00:18:37.890 { 00:18:37.890 "method": "bdev_set_options", 00:18:37.890 "params": { 00:18:37.890 
"bdev_io_pool_size": 65535, 00:18:37.890 "bdev_io_cache_size": 256, 00:18:37.890 "bdev_auto_examine": true, 00:18:37.890 "iobuf_small_cache_size": 128, 00:18:37.890 "iobuf_large_cache_size": 16 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "bdev_raid_set_options", 00:18:37.890 "params": { 00:18:37.890 "process_window_size_kb": 1024, 00:18:37.890 "process_max_bandwidth_mb_sec": 0 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "bdev_iscsi_set_options", 00:18:37.890 "params": { 00:18:37.890 "timeout_sec": 30 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "bdev_nvme_set_options", 00:18:37.890 "params": { 00:18:37.890 "action_on_timeout": "none", 00:18:37.890 "timeout_us": 0, 00:18:37.890 "timeout_admin_us": 0, 00:18:37.890 "keep_alive_timeout_ms": 10000, 00:18:37.890 "arbitration_burst": 0, 00:18:37.890 "low_priority_weight": 0, 00:18:37.890 "medium_priority_weight": 0, 00:18:37.890 "high_priority_weight": 0, 00:18:37.890 "nvme_adminq_poll_period_us": 10000, 00:18:37.890 "nvme_ioq_poll_period_us": 0, 00:18:37.890 "io_queue_requests": 0, 00:18:37.890 "delay_cmd_submit": true, 00:18:37.890 "transport_retry_count": 4, 00:18:37.890 "bdev_retry_count": 3, 00:18:37.890 "transport_ack_timeout": 0, 00:18:37.890 "ctrlr_loss_timeout_sec": 0, 00:18:37.890 "reconnect_delay_sec": 0, 00:18:37.890 "fast_io_fail_timeout_sec": 0, 00:18:37.890 "disable_auto_failback": false, 00:18:37.890 "generate_uuids": false, 00:18:37.890 "transport_tos": 0, 00:18:37.890 "nvme_error_stat": false, 00:18:37.890 "rdma_srq_size": 0, 00:18:37.890 "io_path_stat": false, 00:18:37.890 "allow_accel_sequence": false, 00:18:37.890 "rdma_max_cq_size": 0, 00:18:37.890 "rdma_cm_event_timeout_ms": 0, 00:18:37.890 "dhchap_digests": [ 00:18:37.890 "sha256", 00:18:37.890 "sha384", 00:18:37.890 "sha512" 00:18:37.890 ], 00:18:37.890 "dhchap_dhgroups": [ 00:18:37.890 "null", 00:18:37.890 "ffdhe2048", 00:18:37.890 "ffdhe3072", 00:18:37.890 "ffdhe4096", 
00:18:37.890 "ffdhe6144", 00:18:37.890 "ffdhe8192" 00:18:37.890 ] 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "bdev_nvme_set_hotplug", 00:18:37.890 "params": { 00:18:37.890 "period_us": 100000, 00:18:37.890 "enable": false 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "bdev_malloc_create", 00:18:37.890 "params": { 00:18:37.890 "name": "malloc0", 00:18:37.890 "num_blocks": 8192, 00:18:37.890 "block_size": 4096, 00:18:37.890 "physical_block_size": 4096, 00:18:37.890 "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b", 00:18:37.890 "optimal_io_boundary": 0, 00:18:37.890 "md_size": 0, 00:18:37.890 "dif_type": 0, 00:18:37.890 "dif_is_head_of_md": false, 00:18:37.890 "dif_pi_format": 0 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "bdev_wait_for_examine" 00:18:37.890 } 00:18:37.890 ] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "nbd", 00:18:37.890 "config": [] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "scheduler", 00:18:37.890 "config": [ 00:18:37.890 { 00:18:37.890 "method": "framework_set_scheduler", 00:18:37.890 "params": { 00:18:37.890 "name": "static" 00:18:37.890 } 00:18:37.890 } 00:18:37.890 ] 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "subsystem": "nvmf", 00:18:37.890 "config": [ 00:18:37.890 { 00:18:37.890 "method": "nvmf_set_config", 00:18:37.890 "params": { 00:18:37.890 "discovery_filter": "match_any", 00:18:37.890 "admin_cmd_passthru": { 00:18:37.890 "identify_ctrlr": false 00:18:37.890 }, 00:18:37.890 "dhchap_digests": [ 00:18:37.890 "sha256", 00:18:37.890 "sha384", 00:18:37.890 "sha512" 00:18:37.890 ], 00:18:37.890 "dhchap_dhgroups": [ 00:18:37.890 "null", 00:18:37.890 "ffdhe2048", 00:18:37.890 "ffdhe3072", 00:18:37.890 "ffdhe4096", 00:18:37.890 "ffdhe6144", 00:18:37.890 "ffdhe8192" 00:18:37.890 ] 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "nvmf_set_max_subsystems", 00:18:37.890 "params": { 00:18:37.890 "max_subsystems": 1024 00:18:37.890 
} 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "nvmf_set_crdt", 00:18:37.890 "params": { 00:18:37.890 "crdt1": 0, 00:18:37.890 "crdt2": 0, 00:18:37.890 "crdt3": 0 00:18:37.890 } 00:18:37.890 }, 00:18:37.890 { 00:18:37.890 "method": "nvmf_create_transport", 00:18:37.890 "params": { 00:18:37.890 "trtype": "TCP", 00:18:37.890 "max_queue_depth": 128, 00:18:37.891 "max_io_qpairs_per_ctrlr": 127, 00:18:37.891 "in_capsule_data_size": 4096, 00:18:37.891 "max_io_size": 131072, 00:18:37.891 "io_unit_size": 131072, 00:18:37.891 "max_aq_depth": 128, 00:18:37.891 "num_shared_buffers": 511, 00:18:37.891 "buf_cache_size": 4294967295, 00:18:37.891 "dif_insert_or_strip": false, 00:18:37.891 "zcopy": false, 00:18:37.891 "c2h_success": false, 00:18:37.891 "sock_priority": 0, 00:18:37.891 "abort_timeout_sec": 1, 00:18:37.891 "ack_timeout": 0, 00:18:37.891 "data_wr_pool_size": 0 00:18:37.891 } 00:18:37.891 }, 00:18:37.891 { 00:18:37.891 "method": "nvmf_create_subsystem", 00:18:37.891 "params": { 00:18:37.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.891 "allow_any_host": false, 00:18:37.891 "serial_number": "SPDK00000000000001", 00:18:37.891 "model_number": "SPDK bdev Controller", 00:18:37.891 "max_namespaces": 10, 00:18:37.891 "min_cntlid": 1, 00:18:37.891 "max_cntlid": 65519, 00:18:37.891 "ana_reporting": false 00:18:37.891 } 00:18:37.891 }, 00:18:37.891 { 00:18:37.891 "method": "nvmf_subsystem_add_host", 00:18:37.891 "params": { 00:18:37.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.891 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.891 "psk": "key0" 00:18:37.891 } 00:18:37.891 }, 00:18:37.891 { 00:18:37.891 "method": "nvmf_subsystem_add_ns", 00:18:37.891 "params": { 00:18:37.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.891 "namespace": { 00:18:37.891 "nsid": 1, 00:18:37.891 "bdev_name": "malloc0", 00:18:37.891 "nguid": "8A7E72C03EBD4675A56EAC193C25A21B", 00:18:37.891 "uuid": "8a7e72c0-3ebd-4675-a56e-ac193c25a21b", 00:18:37.891 "no_auto_visible": false 
00:18:37.891 } 00:18:37.891 } 00:18:37.891 }, 00:18:37.891 { 00:18:37.891 "method": "nvmf_subsystem_add_listener", 00:18:37.891 "params": { 00:18:37.891 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.891 "listen_address": { 00:18:37.891 "trtype": "TCP", 00:18:37.891 "adrfam": "IPv4", 00:18:37.891 "traddr": "10.0.0.2", 00:18:37.891 "trsvcid": "4420" 00:18:37.891 }, 00:18:37.891 "secure_channel": true 00:18:37.891 } 00:18:37.891 } 00:18:37.891 ] 00:18:37.891 } 00:18:37.891 ] 00:18:37.891 }' 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=254576 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 254576 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254576 ']' 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.891 04:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.891 [2024-12-09 04:09:06.281727] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:37.891 [2024-12-09 04:09:06.281796] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:37.891 [2024-12-09 04:09:06.357091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:37.891 [2024-12-09 04:09:06.414463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.891 [2024-12-09 04:09:06.414533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.891 [2024-12-09 04:09:06.414547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.891 [2024-12-09 04:09:06.414558] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.891 [2024-12-09 04:09:06.414568] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:37.891 [2024-12-09 04:09:06.415192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.148 [2024-12-09 04:09:06.654209] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.148 [2024-12-09 04:09:06.686232] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.148 [2024-12-09 04:09:06.686512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=254888 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 254888 /var/tmp/bdevperf.sock 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 254888 ']' 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:39.081 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:18:39.081 "subsystems": [ 00:18:39.081 { 00:18:39.081 "subsystem": "keyring", 00:18:39.081 "config": [ 00:18:39.081 { 00:18:39.081 "method": "keyring_file_add_key", 00:18:39.081 "params": { 00:18:39.081 "name": "key0", 00:18:39.081 "path": "/tmp/tmp.fbnd8laNn2" 00:18:39.081 } 00:18:39.081 } 00:18:39.081 ] 00:18:39.081 }, 00:18:39.081 { 00:18:39.081 "subsystem": "iobuf", 00:18:39.081 "config": [ 00:18:39.081 { 00:18:39.081 "method": "iobuf_set_options", 00:18:39.081 "params": { 00:18:39.081 "small_pool_count": 8192, 00:18:39.081 "large_pool_count": 1024, 00:18:39.081 "small_bufsize": 8192, 00:18:39.081 "large_bufsize": 135168, 00:18:39.081 "enable_numa": false 00:18:39.082 } 00:18:39.082 } 00:18:39.082 ] 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "subsystem": "sock", 00:18:39.082 "config": [ 00:18:39.082 { 00:18:39.082 "method": "sock_set_default_impl", 00:18:39.082 "params": { 00:18:39.082 "impl_name": "posix" 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "sock_impl_set_options", 00:18:39.082 "params": { 00:18:39.082 "impl_name": "ssl", 00:18:39.082 "recv_buf_size": 4096, 00:18:39.082 "send_buf_size": 4096, 00:18:39.082 "enable_recv_pipe": true, 00:18:39.082 "enable_quickack": false, 00:18:39.082 "enable_placement_id": 0, 00:18:39.082 "enable_zerocopy_send_server": true, 00:18:39.082 "enable_zerocopy_send_client": false, 00:18:39.082 "zerocopy_threshold": 0, 00:18:39.082 "tls_version": 0, 00:18:39.082 "enable_ktls": false 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "sock_impl_set_options", 00:18:39.082 "params": { 00:18:39.082 "impl_name": "posix", 00:18:39.082 "recv_buf_size": 2097152, 00:18:39.082 "send_buf_size": 2097152, 00:18:39.082 "enable_recv_pipe": true, 00:18:39.082 "enable_quickack": false, 00:18:39.082 "enable_placement_id": 0, 00:18:39.082 "enable_zerocopy_send_server": true, 00:18:39.082 
"enable_zerocopy_send_client": false, 00:18:39.082 "zerocopy_threshold": 0, 00:18:39.082 "tls_version": 0, 00:18:39.082 "enable_ktls": false 00:18:39.082 } 00:18:39.082 } 00:18:39.082 ] 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "subsystem": "vmd", 00:18:39.082 "config": [] 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "subsystem": "accel", 00:18:39.082 "config": [ 00:18:39.082 { 00:18:39.082 "method": "accel_set_options", 00:18:39.082 "params": { 00:18:39.082 "small_cache_size": 128, 00:18:39.082 "large_cache_size": 16, 00:18:39.082 "task_count": 2048, 00:18:39.082 "sequence_count": 2048, 00:18:39.082 "buf_count": 2048 00:18:39.082 } 00:18:39.082 } 00:18:39.082 ] 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "subsystem": "bdev", 00:18:39.082 "config": [ 00:18:39.082 { 00:18:39.082 "method": "bdev_set_options", 00:18:39.082 "params": { 00:18:39.082 "bdev_io_pool_size": 65535, 00:18:39.082 "bdev_io_cache_size": 256, 00:18:39.082 "bdev_auto_examine": true, 00:18:39.082 "iobuf_small_cache_size": 128, 00:18:39.082 "iobuf_large_cache_size": 16 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "bdev_raid_set_options", 00:18:39.082 "params": { 00:18:39.082 "process_window_size_kb": 1024, 00:18:39.082 "process_max_bandwidth_mb_sec": 0 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "bdev_iscsi_set_options", 00:18:39.082 "params": { 00:18:39.082 "timeout_sec": 30 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "bdev_nvme_set_options", 00:18:39.082 "params": { 00:18:39.082 "action_on_timeout": "none", 00:18:39.082 "timeout_us": 0, 00:18:39.082 "timeout_admin_us": 0, 00:18:39.082 "keep_alive_timeout_ms": 10000, 00:18:39.082 "arbitration_burst": 0, 00:18:39.082 "low_priority_weight": 0, 00:18:39.082 "medium_priority_weight": 0, 00:18:39.082 "high_priority_weight": 0, 00:18:39.082 "nvme_adminq_poll_period_us": 10000, 00:18:39.082 "nvme_ioq_poll_period_us": 0, 00:18:39.082 "io_queue_requests": 512, 00:18:39.082 
"delay_cmd_submit": true, 00:18:39.082 "transport_retry_count": 4, 00:18:39.082 "bdev_retry_count": 3, 00:18:39.082 "transport_ack_timeout": 0, 00:18:39.082 "ctrlr_loss_timeout_sec": 0, 00:18:39.082 "reconnect_delay_sec": 0, 00:18:39.082 "fast_io_fail_timeout_sec": 0, 00:18:39.082 "disable_auto_failback": false, 00:18:39.082 "generate_uuids": false, 00:18:39.082 "transport_tos": 0, 00:18:39.082 "nvme_error_stat": false, 00:18:39.082 "rdma_srq_size": 0, 00:18:39.082 "io_path_stat": false, 00:18:39.082 "allow_accel_sequence": false, 00:18:39.082 "rdma_max_cq_size": 0, 00:18:39.082 "rdma_cm_event_timeout_ms": 0, 00:18:39.082 "dhchap_digests": [ 00:18:39.082 "sha256", 00:18:39.082 "sha384", 00:18:39.082 "sha512" 00:18:39.082 ], 00:18:39.082 "dhchap_dhgroups": [ 00:18:39.082 "null", 00:18:39.082 "ffdhe2048", 00:18:39.082 "ffdhe3072", 00:18:39.082 "ffdhe4096", 00:18:39.082 "ffdhe6144", 00:18:39.082 "ffdhe8192" 00:18:39.082 ] 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "bdev_nvme_attach_controller", 00:18:39.082 "params": { 00:18:39.082 "name": "TLSTEST", 00:18:39.082 "trtype": "TCP", 00:18:39.082 "adrfam": "IPv4", 00:18:39.082 "traddr": "10.0.0.2", 00:18:39.082 "trsvcid": "4420", 00:18:39.082 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:39.082 "prchk_reftag": false, 00:18:39.082 "prchk_guard": false, 00:18:39.082 "ctrlr_loss_timeout_sec": 0, 00:18:39.082 "reconnect_delay_sec": 0, 00:18:39.082 "fast_io_fail_timeout_sec": 0, 00:18:39.082 "psk": "key0", 00:18:39.082 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:39.082 "hdgst": false, 00:18:39.082 "ddgst": false, 00:18:39.082 "multipath": "multipath" 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "bdev_nvme_set_hotplug", 00:18:39.082 "params": { 00:18:39.082 "period_us": 100000, 00:18:39.082 "enable": false 00:18:39.082 } 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 "method": "bdev_wait_for_examine" 00:18:39.082 } 00:18:39.082 ] 00:18:39.082 }, 00:18:39.082 { 00:18:39.082 
"subsystem": "nbd", 00:18:39.082 "config": [] 00:18:39.082 } 00:18:39.082 ] 00:18:39.082 }' 00:18:39.082 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.082 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.082 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.082 [2024-12-09 04:09:07.365827] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:39.082 [2024-12-09 04:09:07.365920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid254888 ] 00:18:39.082 [2024-12-09 04:09:07.432919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.082 [2024-12-09 04:09:07.490389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.340 [2024-12-09 04:09:07.668034] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.340 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.340 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.340 04:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.340 Running I/O for 10 seconds... 
00:18:41.646 3219.00 IOPS, 12.57 MiB/s [2024-12-09T03:09:11.154Z] 3312.50 IOPS, 12.94 MiB/s [2024-12-09T03:09:12.085Z] 3271.67 IOPS, 12.78 MiB/s [2024-12-09T03:09:13.017Z] 3310.00 IOPS, 12.93 MiB/s [2024-12-09T03:09:13.949Z] 3347.00 IOPS, 13.07 MiB/s [2024-12-09T03:09:15.320Z] 3354.83 IOPS, 13.10 MiB/s [2024-12-09T03:09:16.251Z] 3365.43 IOPS, 13.15 MiB/s [2024-12-09T03:09:17.182Z] 3372.88 IOPS, 13.18 MiB/s [2024-12-09T03:09:18.115Z] 3375.56 IOPS, 13.19 MiB/s [2024-12-09T03:09:18.115Z] 3373.70 IOPS, 13.18 MiB/s 00:18:49.539 Latency(us) 00:18:49.539 [2024-12-09T03:09:18.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.539 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.539 Verification LBA range: start 0x0 length 0x2000 00:18:49.539 TLSTESTn1 : 10.03 3375.70 13.19 0.00 0.00 37840.89 8932.31 51263.72 00:18:49.539 [2024-12-09T03:09:18.115Z] =================================================================================================================== 00:18:49.539 [2024-12-09T03:09:18.115Z] Total : 3375.70 13.19 0.00 0.00 37840.89 8932.31 51263.72 00:18:49.539 { 00:18:49.539 "results": [ 00:18:49.539 { 00:18:49.539 "job": "TLSTESTn1", 00:18:49.539 "core_mask": "0x4", 00:18:49.539 "workload": "verify", 00:18:49.539 "status": "finished", 00:18:49.539 "verify_range": { 00:18:49.539 "start": 0, 00:18:49.539 "length": 8192 00:18:49.539 }, 00:18:49.539 "queue_depth": 128, 00:18:49.539 "io_size": 4096, 00:18:49.539 "runtime": 10.03169, 00:18:49.539 "iops": 3375.7023990972607, 00:18:49.539 "mibps": 13.186337496473675, 00:18:49.539 "io_failed": 0, 00:18:49.539 "io_timeout": 0, 00:18:49.539 "avg_latency_us": 37840.89085719785, 00:18:49.539 "min_latency_us": 8932.314074074075, 00:18:49.539 "max_latency_us": 51263.71555555556 00:18:49.539 } 00:18:49.539 ], 00:18:49.539 "core_count": 1 00:18:49.539 } 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 254888 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 254888 ']' 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254888 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.539 04:09:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254888 00:18:49.539 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:49.539 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:49.539 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254888' 00:18:49.539 killing process with pid 254888 00:18:49.539 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254888 00:18:49.539 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.539 00:18:49.539 Latency(us) 00:18:49.539 [2024-12-09T03:09:18.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.539 [2024-12-09T03:09:18.115Z] =================================================================================================================== 00:18:49.539 [2024-12-09T03:09:18.115Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.539 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254888 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 254576 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 254576 ']' 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 254576 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 254576 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 254576' 00:18:49.796 killing process with pid 254576 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 254576 00:18:49.796 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 254576 00:18:50.054 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:18:50.054 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:50.054 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=256559 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 256559 00:18:50.055 04:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 256559 ']' 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.055 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.055 [2024-12-09 04:09:18.579459] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:50.055 [2024-12-09 04:09:18.579557] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.313 [2024-12-09 04:09:18.652755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.313 [2024-12-09 04:09:18.706297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.313 [2024-12-09 04:09:18.706364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:50.313 [2024-12-09 04:09:18.706387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.313 [2024-12-09 04:09:18.706398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:50.313 [2024-12-09 04:09:18.706407] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.313 [2024-12-09 04:09:18.706948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.fbnd8laNn2 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.fbnd8laNn2 00:18:50.313 04:09:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.570 [2024-12-09 04:09:19.080516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.570 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.827 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:51.084 [2024-12-09 04:09:19.629993] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:51.084 [2024-12-09 04:09:19.630266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.084 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:51.341 malloc0 00:18:51.598 04:09:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.856 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:52.113 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=256850 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 256850 /var/tmp/bdevperf.sock 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 256850 ']' 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.371 04:09:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:52.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.371 04:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.371 [2024-12-09 04:09:20.806471] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:52.371 [2024-12-09 04:09:20.806557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid256850 ] 00:18:52.371 [2024-12-09 04:09:20.876296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.371 [2024-12-09 04:09:20.935457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.628 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.629 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:52.629 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:52.885 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:53.142 [2024-12-09 04:09:21.575936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:18:53.142 nvme0n1 00:18:53.143 04:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:53.400 Running I/O for 1 seconds... 00:18:54.331 3230.00 IOPS, 12.62 MiB/s 00:18:54.331 Latency(us) 00:18:54.331 [2024-12-09T03:09:22.907Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.331 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:54.331 Verification LBA range: start 0x0 length 0x2000 00:18:54.331 nvme0n1 : 1.03 3269.99 12.77 0.00 0.00 38714.47 6310.87 44855.75 00:18:54.331 [2024-12-09T03:09:22.907Z] =================================================================================================================== 00:18:54.331 [2024-12-09T03:09:22.907Z] Total : 3269.99 12.77 0.00 0.00 38714.47 6310.87 44855.75 00:18:54.331 { 00:18:54.332 "results": [ 00:18:54.332 { 00:18:54.332 "job": "nvme0n1", 00:18:54.332 "core_mask": "0x2", 00:18:54.332 "workload": "verify", 00:18:54.332 "status": "finished", 00:18:54.332 "verify_range": { 00:18:54.332 "start": 0, 00:18:54.332 "length": 8192 00:18:54.332 }, 00:18:54.332 "queue_depth": 128, 00:18:54.332 "io_size": 4096, 00:18:54.332 "runtime": 1.026916, 00:18:54.332 "iops": 3269.9850815451314, 00:18:54.332 "mibps": 12.77337922478567, 00:18:54.332 "io_failed": 0, 00:18:54.332 "io_timeout": 0, 00:18:54.332 "avg_latency_us": 38714.47436878212, 00:18:54.332 "min_latency_us": 6310.874074074074, 00:18:54.332 "max_latency_us": 44855.75111111111 00:18:54.332 } 00:18:54.332 ], 00:18:54.332 "core_count": 1 00:18:54.332 } 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 256850 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 256850 ']' 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 
-- # kill -0 256850 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256850 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256850' 00:18:54.332 killing process with pid 256850 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 256850 00:18:54.332 Received shutdown signal, test time was about 1.000000 seconds 00:18:54.332 00:18:54.332 Latency(us) 00:18:54.332 [2024-12-09T03:09:22.908Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:54.332 [2024-12-09T03:09:22.908Z] =================================================================================================================== 00:18:54.332 [2024-12-09T03:09:22.908Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:54.332 04:09:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 256850 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 256559 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 256559 ']' 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 256559 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 256559 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 256559' 00:18:54.589 killing process with pid 256559 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 256559 00:18:54.589 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 256559 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=257126 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 257126 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257126 ']' 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.848 04:09:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.848 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.107 [2024-12-09 04:09:23.428958] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:55.107 [2024-12-09 04:09:23.429055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.107 [2024-12-09 04:09:23.498332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.107 [2024-12-09 04:09:23.551068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.107 [2024-12-09 04:09:23.551117] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:55.107 [2024-12-09 04:09:23.551141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.107 [2024-12-09 04:09:23.551151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.107 [2024-12-09 04:09:23.551161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:55.107 [2024-12-09 04:09:23.551744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.107 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.107 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.107 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:55.107 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:55.107 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 [2024-12-09 04:09:23.686465] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:55.364 malloc0 00:18:55.364 [2024-12-09 04:09:23.717513] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:55.364 [2024-12-09 04:09:23.717806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=257241 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 257241 /var/tmp/bdevperf.sock 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257241 ']' 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:55.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:55.364 04:09:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:55.364 [2024-12-09 04:09:23.791461] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:55.364 [2024-12-09 04:09:23.791546] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257241 ] 00:18:55.364 [2024-12-09 04:09:23.860069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.364 [2024-12-09 04:09:23.916766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.621 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:55.621 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:55.621 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fbnd8laNn2 00:18:55.878 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:56.135 [2024-12-09 04:09:24.529000] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:56.135 nvme0n1 00:18:56.135 04:09:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.401 Running I/O for 1 seconds... 
00:18:57.339 3535.00 IOPS, 13.81 MiB/s 00:18:57.339 Latency(us) 00:18:57.339 [2024-12-09T03:09:25.915Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.339 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:57.339 Verification LBA range: start 0x0 length 0x2000 00:18:57.339 nvme0n1 : 1.02 3595.09 14.04 0.00 0.00 35263.62 6019.60 43496.49 00:18:57.339 [2024-12-09T03:09:25.915Z] =================================================================================================================== 00:18:57.339 [2024-12-09T03:09:25.915Z] Total : 3595.09 14.04 0.00 0.00 35263.62 6019.60 43496.49 00:18:57.339 { 00:18:57.339 "results": [ 00:18:57.339 { 00:18:57.339 "job": "nvme0n1", 00:18:57.339 "core_mask": "0x2", 00:18:57.339 "workload": "verify", 00:18:57.339 "status": "finished", 00:18:57.339 "verify_range": { 00:18:57.339 "start": 0, 00:18:57.339 "length": 8192 00:18:57.339 }, 00:18:57.339 "queue_depth": 128, 00:18:57.339 "io_size": 4096, 00:18:57.339 "runtime": 1.018891, 00:18:57.339 "iops": 3595.0852446434405, 00:18:57.339 "mibps": 14.04330173688844, 00:18:57.339 "io_failed": 0, 00:18:57.339 "io_timeout": 0, 00:18:57.339 "avg_latency_us": 35263.62076662521, 00:18:57.339 "min_latency_us": 6019.602962962963, 00:18:57.339 "max_latency_us": 43496.485925925925 00:18:57.339 } 00:18:57.339 ], 00:18:57.339 "core_count": 1 00:18:57.340 } 00:18:57.340 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:57.340 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.340 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:57.340 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.340 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:57.340 "subsystems": [ 00:18:57.340 { 00:18:57.340 "subsystem": 
"keyring", 00:18:57.340 "config": [ 00:18:57.340 { 00:18:57.340 "method": "keyring_file_add_key", 00:18:57.340 "params": { 00:18:57.340 "name": "key0", 00:18:57.340 "path": "/tmp/tmp.fbnd8laNn2" 00:18:57.340 } 00:18:57.340 } 00:18:57.340 ] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "iobuf", 00:18:57.340 "config": [ 00:18:57.340 { 00:18:57.340 "method": "iobuf_set_options", 00:18:57.340 "params": { 00:18:57.340 "small_pool_count": 8192, 00:18:57.340 "large_pool_count": 1024, 00:18:57.340 "small_bufsize": 8192, 00:18:57.340 "large_bufsize": 135168, 00:18:57.340 "enable_numa": false 00:18:57.340 } 00:18:57.340 } 00:18:57.340 ] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "sock", 00:18:57.340 "config": [ 00:18:57.340 { 00:18:57.340 "method": "sock_set_default_impl", 00:18:57.340 "params": { 00:18:57.340 "impl_name": "posix" 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "sock_impl_set_options", 00:18:57.340 "params": { 00:18:57.340 "impl_name": "ssl", 00:18:57.340 "recv_buf_size": 4096, 00:18:57.340 "send_buf_size": 4096, 00:18:57.340 "enable_recv_pipe": true, 00:18:57.340 "enable_quickack": false, 00:18:57.340 "enable_placement_id": 0, 00:18:57.340 "enable_zerocopy_send_server": true, 00:18:57.340 "enable_zerocopy_send_client": false, 00:18:57.340 "zerocopy_threshold": 0, 00:18:57.340 "tls_version": 0, 00:18:57.340 "enable_ktls": false 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "sock_impl_set_options", 00:18:57.340 "params": { 00:18:57.340 "impl_name": "posix", 00:18:57.340 "recv_buf_size": 2097152, 00:18:57.340 "send_buf_size": 2097152, 00:18:57.340 "enable_recv_pipe": true, 00:18:57.340 "enable_quickack": false, 00:18:57.340 "enable_placement_id": 0, 00:18:57.340 "enable_zerocopy_send_server": true, 00:18:57.340 "enable_zerocopy_send_client": false, 00:18:57.340 "zerocopy_threshold": 0, 00:18:57.340 "tls_version": 0, 00:18:57.340 "enable_ktls": false 00:18:57.340 } 00:18:57.340 } 00:18:57.340 
] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "vmd", 00:18:57.340 "config": [] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "accel", 00:18:57.340 "config": [ 00:18:57.340 { 00:18:57.340 "method": "accel_set_options", 00:18:57.340 "params": { 00:18:57.340 "small_cache_size": 128, 00:18:57.340 "large_cache_size": 16, 00:18:57.340 "task_count": 2048, 00:18:57.340 "sequence_count": 2048, 00:18:57.340 "buf_count": 2048 00:18:57.340 } 00:18:57.340 } 00:18:57.340 ] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "bdev", 00:18:57.340 "config": [ 00:18:57.340 { 00:18:57.340 "method": "bdev_set_options", 00:18:57.340 "params": { 00:18:57.340 "bdev_io_pool_size": 65535, 00:18:57.340 "bdev_io_cache_size": 256, 00:18:57.340 "bdev_auto_examine": true, 00:18:57.340 "iobuf_small_cache_size": 128, 00:18:57.340 "iobuf_large_cache_size": 16 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "bdev_raid_set_options", 00:18:57.340 "params": { 00:18:57.340 "process_window_size_kb": 1024, 00:18:57.340 "process_max_bandwidth_mb_sec": 0 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "bdev_iscsi_set_options", 00:18:57.340 "params": { 00:18:57.340 "timeout_sec": 30 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "bdev_nvme_set_options", 00:18:57.340 "params": { 00:18:57.340 "action_on_timeout": "none", 00:18:57.340 "timeout_us": 0, 00:18:57.340 "timeout_admin_us": 0, 00:18:57.340 "keep_alive_timeout_ms": 10000, 00:18:57.340 "arbitration_burst": 0, 00:18:57.340 "low_priority_weight": 0, 00:18:57.340 "medium_priority_weight": 0, 00:18:57.340 "high_priority_weight": 0, 00:18:57.340 "nvme_adminq_poll_period_us": 10000, 00:18:57.340 "nvme_ioq_poll_period_us": 0, 00:18:57.340 "io_queue_requests": 0, 00:18:57.340 "delay_cmd_submit": true, 00:18:57.340 "transport_retry_count": 4, 00:18:57.340 "bdev_retry_count": 3, 00:18:57.340 "transport_ack_timeout": 0, 00:18:57.340 "ctrlr_loss_timeout_sec": 0, 
00:18:57.340 "reconnect_delay_sec": 0, 00:18:57.340 "fast_io_fail_timeout_sec": 0, 00:18:57.340 "disable_auto_failback": false, 00:18:57.340 "generate_uuids": false, 00:18:57.340 "transport_tos": 0, 00:18:57.340 "nvme_error_stat": false, 00:18:57.340 "rdma_srq_size": 0, 00:18:57.340 "io_path_stat": false, 00:18:57.340 "allow_accel_sequence": false, 00:18:57.340 "rdma_max_cq_size": 0, 00:18:57.340 "rdma_cm_event_timeout_ms": 0, 00:18:57.340 "dhchap_digests": [ 00:18:57.340 "sha256", 00:18:57.340 "sha384", 00:18:57.340 "sha512" 00:18:57.340 ], 00:18:57.340 "dhchap_dhgroups": [ 00:18:57.340 "null", 00:18:57.340 "ffdhe2048", 00:18:57.340 "ffdhe3072", 00:18:57.340 "ffdhe4096", 00:18:57.340 "ffdhe6144", 00:18:57.340 "ffdhe8192" 00:18:57.340 ] 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "bdev_nvme_set_hotplug", 00:18:57.340 "params": { 00:18:57.340 "period_us": 100000, 00:18:57.340 "enable": false 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "bdev_malloc_create", 00:18:57.340 "params": { 00:18:57.340 "name": "malloc0", 00:18:57.340 "num_blocks": 8192, 00:18:57.340 "block_size": 4096, 00:18:57.340 "physical_block_size": 4096, 00:18:57.340 "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e", 00:18:57.340 "optimal_io_boundary": 0, 00:18:57.340 "md_size": 0, 00:18:57.340 "dif_type": 0, 00:18:57.340 "dif_is_head_of_md": false, 00:18:57.340 "dif_pi_format": 0 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "bdev_wait_for_examine" 00:18:57.340 } 00:18:57.340 ] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "nbd", 00:18:57.340 "config": [] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "scheduler", 00:18:57.340 "config": [ 00:18:57.340 { 00:18:57.340 "method": "framework_set_scheduler", 00:18:57.340 "params": { 00:18:57.340 "name": "static" 00:18:57.340 } 00:18:57.340 } 00:18:57.340 ] 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "subsystem": "nvmf", 00:18:57.340 "config": [ 00:18:57.340 { 
00:18:57.340 "method": "nvmf_set_config", 00:18:57.340 "params": { 00:18:57.340 "discovery_filter": "match_any", 00:18:57.340 "admin_cmd_passthru": { 00:18:57.340 "identify_ctrlr": false 00:18:57.340 }, 00:18:57.340 "dhchap_digests": [ 00:18:57.340 "sha256", 00:18:57.340 "sha384", 00:18:57.340 "sha512" 00:18:57.340 ], 00:18:57.340 "dhchap_dhgroups": [ 00:18:57.340 "null", 00:18:57.340 "ffdhe2048", 00:18:57.340 "ffdhe3072", 00:18:57.340 "ffdhe4096", 00:18:57.340 "ffdhe6144", 00:18:57.340 "ffdhe8192" 00:18:57.340 ] 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "nvmf_set_max_subsystems", 00:18:57.340 "params": { 00:18:57.340 "max_subsystems": 1024 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "nvmf_set_crdt", 00:18:57.340 "params": { 00:18:57.340 "crdt1": 0, 00:18:57.340 "crdt2": 0, 00:18:57.340 "crdt3": 0 00:18:57.340 } 00:18:57.340 }, 00:18:57.340 { 00:18:57.340 "method": "nvmf_create_transport", 00:18:57.340 "params": { 00:18:57.340 "trtype": "TCP", 00:18:57.340 "max_queue_depth": 128, 00:18:57.340 "max_io_qpairs_per_ctrlr": 127, 00:18:57.340 "in_capsule_data_size": 4096, 00:18:57.340 "max_io_size": 131072, 00:18:57.340 "io_unit_size": 131072, 00:18:57.340 "max_aq_depth": 128, 00:18:57.340 "num_shared_buffers": 511, 00:18:57.340 "buf_cache_size": 4294967295, 00:18:57.340 "dif_insert_or_strip": false, 00:18:57.340 "zcopy": false, 00:18:57.341 "c2h_success": false, 00:18:57.341 "sock_priority": 0, 00:18:57.341 "abort_timeout_sec": 1, 00:18:57.341 "ack_timeout": 0, 00:18:57.341 "data_wr_pool_size": 0 00:18:57.341 } 00:18:57.341 }, 00:18:57.341 { 00:18:57.341 "method": "nvmf_create_subsystem", 00:18:57.341 "params": { 00:18:57.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.341 "allow_any_host": false, 00:18:57.341 "serial_number": "00000000000000000000", 00:18:57.341 "model_number": "SPDK bdev Controller", 00:18:57.341 "max_namespaces": 32, 00:18:57.341 "min_cntlid": 1, 00:18:57.341 "max_cntlid": 65519, 00:18:57.341 
"ana_reporting": false 00:18:57.341 } 00:18:57.341 }, 00:18:57.341 { 00:18:57.341 "method": "nvmf_subsystem_add_host", 00:18:57.341 "params": { 00:18:57.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.341 "host": "nqn.2016-06.io.spdk:host1", 00:18:57.341 "psk": "key0" 00:18:57.341 } 00:18:57.341 }, 00:18:57.341 { 00:18:57.341 "method": "nvmf_subsystem_add_ns", 00:18:57.341 "params": { 00:18:57.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.341 "namespace": { 00:18:57.341 "nsid": 1, 00:18:57.341 "bdev_name": "malloc0", 00:18:57.341 "nguid": "228CF112B03A4AA4BA0071FD6183E66E", 00:18:57.341 "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e", 00:18:57.341 "no_auto_visible": false 00:18:57.341 } 00:18:57.341 } 00:18:57.341 }, 00:18:57.341 { 00:18:57.341 "method": "nvmf_subsystem_add_listener", 00:18:57.341 "params": { 00:18:57.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.341 "listen_address": { 00:18:57.341 "trtype": "TCP", 00:18:57.341 "adrfam": "IPv4", 00:18:57.341 "traddr": "10.0.0.2", 00:18:57.341 "trsvcid": "4420" 00:18:57.341 }, 00:18:57.341 "secure_channel": false, 00:18:57.341 "sock_impl": "ssl" 00:18:57.341 } 00:18:57.341 } 00:18:57.341 ] 00:18:57.341 } 00:18:57.341 ] 00:18:57.341 }' 00:18:57.341 04:09:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:57.906 "subsystems": [ 00:18:57.906 { 00:18:57.906 "subsystem": "keyring", 00:18:57.906 "config": [ 00:18:57.906 { 00:18:57.906 "method": "keyring_file_add_key", 00:18:57.906 "params": { 00:18:57.906 "name": "key0", 00:18:57.906 "path": "/tmp/tmp.fbnd8laNn2" 00:18:57.906 } 00:18:57.906 } 00:18:57.906 ] 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "subsystem": "iobuf", 00:18:57.906 "config": [ 00:18:57.906 { 00:18:57.906 "method": "iobuf_set_options", 00:18:57.906 "params": { 00:18:57.906 
"small_pool_count": 8192, 00:18:57.906 "large_pool_count": 1024, 00:18:57.906 "small_bufsize": 8192, 00:18:57.906 "large_bufsize": 135168, 00:18:57.906 "enable_numa": false 00:18:57.906 } 00:18:57.906 } 00:18:57.906 ] 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "subsystem": "sock", 00:18:57.906 "config": [ 00:18:57.906 { 00:18:57.906 "method": "sock_set_default_impl", 00:18:57.906 "params": { 00:18:57.906 "impl_name": "posix" 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "sock_impl_set_options", 00:18:57.906 "params": { 00:18:57.906 "impl_name": "ssl", 00:18:57.906 "recv_buf_size": 4096, 00:18:57.906 "send_buf_size": 4096, 00:18:57.906 "enable_recv_pipe": true, 00:18:57.906 "enable_quickack": false, 00:18:57.906 "enable_placement_id": 0, 00:18:57.906 "enable_zerocopy_send_server": true, 00:18:57.906 "enable_zerocopy_send_client": false, 00:18:57.906 "zerocopy_threshold": 0, 00:18:57.906 "tls_version": 0, 00:18:57.906 "enable_ktls": false 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "sock_impl_set_options", 00:18:57.906 "params": { 00:18:57.906 "impl_name": "posix", 00:18:57.906 "recv_buf_size": 2097152, 00:18:57.906 "send_buf_size": 2097152, 00:18:57.906 "enable_recv_pipe": true, 00:18:57.906 "enable_quickack": false, 00:18:57.906 "enable_placement_id": 0, 00:18:57.906 "enable_zerocopy_send_server": true, 00:18:57.906 "enable_zerocopy_send_client": false, 00:18:57.906 "zerocopy_threshold": 0, 00:18:57.906 "tls_version": 0, 00:18:57.906 "enable_ktls": false 00:18:57.906 } 00:18:57.906 } 00:18:57.906 ] 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "subsystem": "vmd", 00:18:57.906 "config": [] 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "subsystem": "accel", 00:18:57.906 "config": [ 00:18:57.906 { 00:18:57.906 "method": "accel_set_options", 00:18:57.906 "params": { 00:18:57.906 "small_cache_size": 128, 00:18:57.906 "large_cache_size": 16, 00:18:57.906 "task_count": 2048, 00:18:57.906 "sequence_count": 2048, 00:18:57.906 
"buf_count": 2048 00:18:57.906 } 00:18:57.906 } 00:18:57.906 ] 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "subsystem": "bdev", 00:18:57.906 "config": [ 00:18:57.906 { 00:18:57.906 "method": "bdev_set_options", 00:18:57.906 "params": { 00:18:57.906 "bdev_io_pool_size": 65535, 00:18:57.906 "bdev_io_cache_size": 256, 00:18:57.906 "bdev_auto_examine": true, 00:18:57.906 "iobuf_small_cache_size": 128, 00:18:57.906 "iobuf_large_cache_size": 16 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_raid_set_options", 00:18:57.906 "params": { 00:18:57.906 "process_window_size_kb": 1024, 00:18:57.906 "process_max_bandwidth_mb_sec": 0 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_iscsi_set_options", 00:18:57.906 "params": { 00:18:57.906 "timeout_sec": 30 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_nvme_set_options", 00:18:57.906 "params": { 00:18:57.906 "action_on_timeout": "none", 00:18:57.906 "timeout_us": 0, 00:18:57.906 "timeout_admin_us": 0, 00:18:57.906 "keep_alive_timeout_ms": 10000, 00:18:57.906 "arbitration_burst": 0, 00:18:57.906 "low_priority_weight": 0, 00:18:57.906 "medium_priority_weight": 0, 00:18:57.906 "high_priority_weight": 0, 00:18:57.906 "nvme_adminq_poll_period_us": 10000, 00:18:57.906 "nvme_ioq_poll_period_us": 0, 00:18:57.906 "io_queue_requests": 512, 00:18:57.906 "delay_cmd_submit": true, 00:18:57.906 "transport_retry_count": 4, 00:18:57.906 "bdev_retry_count": 3, 00:18:57.906 "transport_ack_timeout": 0, 00:18:57.906 "ctrlr_loss_timeout_sec": 0, 00:18:57.906 "reconnect_delay_sec": 0, 00:18:57.906 "fast_io_fail_timeout_sec": 0, 00:18:57.906 "disable_auto_failback": false, 00:18:57.906 "generate_uuids": false, 00:18:57.906 "transport_tos": 0, 00:18:57.906 "nvme_error_stat": false, 00:18:57.906 "rdma_srq_size": 0, 00:18:57.906 "io_path_stat": false, 00:18:57.906 "allow_accel_sequence": false, 00:18:57.906 "rdma_max_cq_size": 0, 00:18:57.906 "rdma_cm_event_timeout_ms": 0, 
00:18:57.906 "dhchap_digests": [ 00:18:57.906 "sha256", 00:18:57.906 "sha384", 00:18:57.906 "sha512" 00:18:57.906 ], 00:18:57.906 "dhchap_dhgroups": [ 00:18:57.906 "null", 00:18:57.906 "ffdhe2048", 00:18:57.906 "ffdhe3072", 00:18:57.906 "ffdhe4096", 00:18:57.906 "ffdhe6144", 00:18:57.906 "ffdhe8192" 00:18:57.906 ] 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_nvme_attach_controller", 00:18:57.906 "params": { 00:18:57.906 "name": "nvme0", 00:18:57.906 "trtype": "TCP", 00:18:57.906 "adrfam": "IPv4", 00:18:57.906 "traddr": "10.0.0.2", 00:18:57.906 "trsvcid": "4420", 00:18:57.906 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:57.906 "prchk_reftag": false, 00:18:57.906 "prchk_guard": false, 00:18:57.906 "ctrlr_loss_timeout_sec": 0, 00:18:57.906 "reconnect_delay_sec": 0, 00:18:57.906 "fast_io_fail_timeout_sec": 0, 00:18:57.906 "psk": "key0", 00:18:57.906 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:57.906 "hdgst": false, 00:18:57.906 "ddgst": false, 00:18:57.906 "multipath": "multipath" 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_nvme_set_hotplug", 00:18:57.906 "params": { 00:18:57.906 "period_us": 100000, 00:18:57.906 "enable": false 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_enable_histogram", 00:18:57.906 "params": { 00:18:57.906 "name": "nvme0n1", 00:18:57.906 "enable": true 00:18:57.906 } 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "method": "bdev_wait_for_examine" 00:18:57.906 } 00:18:57.906 ] 00:18:57.906 }, 00:18:57.906 { 00:18:57.906 "subsystem": "nbd", 00:18:57.906 "config": [] 00:18:57.906 } 00:18:57.906 ] 00:18:57.906 }' 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 257241 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257241 ']' 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257241 00:18:57.906 04:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257241 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:57.906 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:57.907 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257241' 00:18:57.907 killing process with pid 257241 00:18:57.907 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257241 00:18:57.907 Received shutdown signal, test time was about 1.000000 seconds 00:18:57.907 00:18:57.907 Latency(us) 00:18:57.907 [2024-12-09T03:09:26.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:57.907 [2024-12-09T03:09:26.483Z] =================================================================================================================== 00:18:57.907 [2024-12-09T03:09:26.483Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:57.907 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257241 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 257126 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257126 ']' 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257126 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:58.164 04:09:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257126 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257126' 00:18:58.164 killing process with pid 257126 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257126 00:18:58.164 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257126 00:18:58.421 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:58.421 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:58.421 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:58.421 "subsystems": [ 00:18:58.421 { 00:18:58.421 "subsystem": "keyring", 00:18:58.421 "config": [ 00:18:58.421 { 00:18:58.421 "method": "keyring_file_add_key", 00:18:58.421 "params": { 00:18:58.421 "name": "key0", 00:18:58.421 "path": "/tmp/tmp.fbnd8laNn2" 00:18:58.421 } 00:18:58.421 } 00:18:58.421 ] 00:18:58.421 }, 00:18:58.421 { 00:18:58.421 "subsystem": "iobuf", 00:18:58.421 "config": [ 00:18:58.421 { 00:18:58.421 "method": "iobuf_set_options", 00:18:58.421 "params": { 00:18:58.421 "small_pool_count": 8192, 00:18:58.421 "large_pool_count": 1024, 00:18:58.421 "small_bufsize": 8192, 00:18:58.421 "large_bufsize": 135168, 00:18:58.421 "enable_numa": false 00:18:58.421 } 00:18:58.421 } 00:18:58.421 ] 00:18:58.421 }, 00:18:58.421 { 00:18:58.421 "subsystem": "sock", 00:18:58.421 "config": [ 00:18:58.421 { 00:18:58.421 "method": "sock_set_default_impl", 00:18:58.421 "params": { 00:18:58.421 "impl_name": "posix" 00:18:58.421 
} 00:18:58.421 }, 00:18:58.421 { 00:18:58.421 "method": "sock_impl_set_options", 00:18:58.421 "params": { 00:18:58.421 "impl_name": "ssl", 00:18:58.421 "recv_buf_size": 4096, 00:18:58.421 "send_buf_size": 4096, 00:18:58.421 "enable_recv_pipe": true, 00:18:58.421 "enable_quickack": false, 00:18:58.421 "enable_placement_id": 0, 00:18:58.421 "enable_zerocopy_send_server": true, 00:18:58.421 "enable_zerocopy_send_client": false, 00:18:58.421 "zerocopy_threshold": 0, 00:18:58.421 "tls_version": 0, 00:18:58.421 "enable_ktls": false 00:18:58.421 } 00:18:58.421 }, 00:18:58.421 { 00:18:58.421 "method": "sock_impl_set_options", 00:18:58.421 "params": { 00:18:58.421 "impl_name": "posix", 00:18:58.421 "recv_buf_size": 2097152, 00:18:58.421 "send_buf_size": 2097152, 00:18:58.421 "enable_recv_pipe": true, 00:18:58.421 "enable_quickack": false, 00:18:58.421 "enable_placement_id": 0, 00:18:58.422 "enable_zerocopy_send_server": true, 00:18:58.422 "enable_zerocopy_send_client": false, 00:18:58.422 "zerocopy_threshold": 0, 00:18:58.422 "tls_version": 0, 00:18:58.422 "enable_ktls": false 00:18:58.422 } 00:18:58.422 } 00:18:58.422 ] 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "subsystem": "vmd", 00:18:58.422 "config": [] 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "subsystem": "accel", 00:18:58.422 "config": [ 00:18:58.422 { 00:18:58.422 "method": "accel_set_options", 00:18:58.422 "params": { 00:18:58.422 "small_cache_size": 128, 00:18:58.422 "large_cache_size": 16, 00:18:58.422 "task_count": 2048, 00:18:58.422 "sequence_count": 2048, 00:18:58.422 "buf_count": 2048 00:18:58.422 } 00:18:58.422 } 00:18:58.422 ] 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "subsystem": "bdev", 00:18:58.422 "config": [ 00:18:58.422 { 00:18:58.422 "method": "bdev_set_options", 00:18:58.422 "params": { 00:18:58.422 "bdev_io_pool_size": 65535, 00:18:58.422 "bdev_io_cache_size": 256, 00:18:58.422 "bdev_auto_examine": true, 00:18:58.422 "iobuf_small_cache_size": 128, 00:18:58.422 "iobuf_large_cache_size": 16 
00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "bdev_raid_set_options", 00:18:58.422 "params": { 00:18:58.422 "process_window_size_kb": 1024, 00:18:58.422 "process_max_bandwidth_mb_sec": 0 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "bdev_iscsi_set_options", 00:18:58.422 "params": { 00:18:58.422 "timeout_sec": 30 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "bdev_nvme_set_options", 00:18:58.422 "params": { 00:18:58.422 "action_on_timeout": "none", 00:18:58.422 "timeout_us": 0, 00:18:58.422 "timeout_admin_us": 0, 00:18:58.422 "keep_alive_timeout_ms": 10000, 00:18:58.422 "arbitration_burst": 0, 00:18:58.422 "low_priority_weight": 0, 00:18:58.422 "medium_priority_weight": 0, 00:18:58.422 "high_priority_weight": 0, 00:18:58.422 "nvme_adminq_poll_period_us": 10000, 00:18:58.422 "nvme_ioq_poll_period_us": 0, 00:18:58.422 "io_queue_requests": 0, 00:18:58.422 "delay_cmd_submit": true, 00:18:58.422 "transport_retry_count": 4, 00:18:58.422 "bdev_retry_count": 3, 00:18:58.422 "transport_ack_timeout": 0, 00:18:58.422 "ctrlr_loss_timeout_sec": 0, 00:18:58.422 "reconnect_delay_sec": 0, 00:18:58.422 "fast_io_fail_timeout_sec": 0, 00:18:58.422 "disable_auto_failback": false, 00:18:58.422 "generate_uuids": false, 00:18:58.422 "transport_tos": 0, 00:18:58.422 "nvme_error_stat": false, 00:18:58.422 "rdma_srq_size": 0, 00:18:58.422 "io_path_stat": false, 00:18:58.422 "allow_accel_sequence": false, 00:18:58.422 "rdma_max_cq_size": 0, 00:18:58.422 "rdma_cm_event_timeout_ms": 0, 00:18:58.422 "dhchap_digests": [ 00:18:58.422 "sha256", 00:18:58.422 "sha384", 00:18:58.422 "sha512" 00:18:58.422 ], 00:18:58.422 "dhchap_dhgroups": [ 00:18:58.422 "null", 00:18:58.422 "ffdhe2048", 00:18:58.422 "ffdhe3072", 00:18:58.422 "ffdhe4096", 00:18:58.422 "ffdhe6144", 00:18:58.422 "ffdhe8192" 00:18:58.422 ] 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "bdev_nvme_set_hotplug", 00:18:58.422 "params": { 00:18:58.422 
"period_us": 100000, 00:18:58.422 "enable": false 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "bdev_malloc_create", 00:18:58.422 "params": { 00:18:58.422 "name": "malloc0", 00:18:58.422 "num_blocks": 8192, 00:18:58.422 "block_size": 4096, 00:18:58.422 "physical_block_size": 4096, 00:18:58.422 "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e", 00:18:58.422 "optimal_io_boundary": 0, 00:18:58.422 "md_size": 0, 00:18:58.422 "dif_type": 0, 00:18:58.422 "dif_is_head_of_md": false, 00:18:58.422 "dif_pi_format": 0 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "bdev_wait_for_examine" 00:18:58.422 } 00:18:58.422 ] 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "subsystem": "nbd", 00:18:58.422 "config": [] 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "subsystem": "scheduler", 00:18:58.422 "config": [ 00:18:58.422 { 00:18:58.422 "method": "framework_set_scheduler", 00:18:58.422 "params": { 00:18:58.422 "name": "static" 00:18:58.422 } 00:18:58.422 } 00:18:58.422 ] 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "subsystem": "nvmf", 00:18:58.422 "config": [ 00:18:58.422 { 00:18:58.422 "method": "nvmf_set_config", 00:18:58.422 "params": { 00:18:58.422 "discovery_filter": "match_any", 00:18:58.422 "admin_cmd_passthru": { 00:18:58.422 "identify_ctrlr": false 00:18:58.422 }, 00:18:58.422 "dhchap_digests": [ 00:18:58.422 "sha256", 00:18:58.422 "sha384", 00:18:58.422 "sha512" 00:18:58.422 ], 00:18:58.422 "dhchap_dhgroups": [ 00:18:58.422 "null", 00:18:58.422 "ffdhe2048", 00:18:58.422 "ffdhe3072", 00:18:58.422 "ffdhe4096", 00:18:58.422 "ffdhe6144", 00:18:58.422 "ffdhe8192" 00:18:58.422 ] 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_set_max_subsystems", 00:18:58.422 "params": { 00:18:58.422 "max_subsystems": 1024 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_set_crdt", 00:18:58.422 "params": { 00:18:58.422 "crdt1": 0, 00:18:58.422 "crdt2": 0, 00:18:58.422 "crdt3": 0 00:18:58.422 } 
00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_create_transport", 00:18:58.422 "params": { 00:18:58.422 "trtype": "TCP", 00:18:58.422 "max_queue_depth": 128, 00:18:58.422 "max_io_qpairs_per_ctrlr": 127, 00:18:58.422 "in_capsule_data_size": 4096, 00:18:58.422 "max_io_size": 131072, 00:18:58.422 "io_unit_size": 131072, 00:18:58.422 "max_aq_depth": 128, 00:18:58.422 "num_shared_buffers": 511, 00:18:58.422 "buf_cache_size": 4294967295, 00:18:58.422 "dif_insert_or_strip": false, 00:18:58.422 "zcopy": false, 00:18:58.422 "c2h_success": false, 00:18:58.422 "sock_priority": 0, 00:18:58.422 "abort_timeout_sec": 1, 00:18:58.422 "ack_timeout": 0, 00:18:58.422 "data_wr_pool_size": 0 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_create_subsystem", 00:18:58.422 "params": { 00:18:58.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.422 "allow_any_host": false, 00:18:58.422 "serial_number": "00000000000000000000", 00:18:58.422 "model_number": "SPDK bdev Controller", 00:18:58.422 "max_namespaces": 32, 00:18:58.422 "min_cntlid": 1, 00:18:58.422 "max_cntlid": 65519, 00:18:58.422 "ana_reporting": false 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_subsystem_add_host", 00:18:58.422 "params": { 00:18:58.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.422 "host": "nqn.2016-06.io.spdk:host1", 00:18:58.422 "psk": "key0" 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_subsystem_add_ns", 00:18:58.422 "params": { 00:18:58.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.422 "namespace": { 00:18:58.422 "nsid": 1, 00:18:58.422 "bdev_name": "malloc0", 00:18:58.422 "nguid": "228CF112B03A4AA4BA0071FD6183E66E", 00:18:58.422 "uuid": "228cf112-b03a-4aa4-ba00-71fd6183e66e", 00:18:58.422 "no_auto_visible": false 00:18:58.422 } 00:18:58.422 } 00:18:58.422 }, 00:18:58.422 { 00:18:58.422 "method": "nvmf_subsystem_add_listener", 00:18:58.422 "params": { 00:18:58.422 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:18:58.422 "listen_address": { 00:18:58.422 "trtype": "TCP", 00:18:58.422 "adrfam": "IPv4", 00:18:58.422 "traddr": "10.0.0.2", 00:18:58.422 "trsvcid": "4420" 00:18:58.422 }, 00:18:58.422 "secure_channel": false, 00:18:58.422 "sock_impl": "ssl" 00:18:58.422 } 00:18:58.422 } 00:18:58.422 ] 00:18:58.422 } 00:18:58.422 ] 00:18:58.422 }' 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=257562 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 257562 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257562 ']' 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:58.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.422 04:09:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.422 [2024-12-09 04:09:26.840708] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:18:58.423 [2024-12-09 04:09:26.840790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:58.423 [2024-12-09 04:09:26.909885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.423 [2024-12-09 04:09:26.962516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:58.423 [2024-12-09 04:09:26.962583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:58.423 [2024-12-09 04:09:26.962596] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:58.423 [2024-12-09 04:09:26.962607] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:58.423 [2024-12-09 04:09:26.962625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:58.423 [2024-12-09 04:09:26.963224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.680 [2024-12-09 04:09:27.204812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.680 [2024-12-09 04:09:27.236846] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:58.680 [2024-12-09 04:09:27.237073] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=257706 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 257706 /var/tmp/bdevperf.sock 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 257706 ']' 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:58.938 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:58.938 "subsystems": [ 00:18:58.938 { 00:18:58.938 "subsystem": "keyring", 00:18:58.938 "config": [ 00:18:58.938 { 00:18:58.938 "method": "keyring_file_add_key", 00:18:58.938 "params": { 00:18:58.938 "name": "key0", 00:18:58.938 "path": "/tmp/tmp.fbnd8laNn2" 00:18:58.938 } 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "iobuf", 00:18:58.938 "config": [ 00:18:58.938 { 00:18:58.938 "method": "iobuf_set_options", 00:18:58.938 "params": { 00:18:58.938 "small_pool_count": 8192, 00:18:58.938 "large_pool_count": 1024, 00:18:58.938 "small_bufsize": 8192, 00:18:58.938 "large_bufsize": 135168, 00:18:58.938 "enable_numa": false 00:18:58.938 } 00:18:58.938 } 00:18:58.938 ] 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "subsystem": "sock", 00:18:58.938 "config": [ 00:18:58.938 { 00:18:58.938 "method": "sock_set_default_impl", 00:18:58.938 "params": { 00:18:58.938 "impl_name": "posix" 00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.938 "method": "sock_impl_set_options", 00:18:58.938 "params": { 00:18:58.938 "impl_name": "ssl", 00:18:58.938 "recv_buf_size": 4096, 00:18:58.938 "send_buf_size": 4096, 00:18:58.938 "enable_recv_pipe": true, 00:18:58.938 "enable_quickack": false, 00:18:58.938 "enable_placement_id": 0, 00:18:58.938 "enable_zerocopy_send_server": true, 00:18:58.938 "enable_zerocopy_send_client": false, 00:18:58.938 "zerocopy_threshold": 0, 00:18:58.938 "tls_version": 0, 00:18:58.938 "enable_ktls": false 
00:18:58.938 } 00:18:58.938 }, 00:18:58.938 { 00:18:58.939 "method": "sock_impl_set_options", 00:18:58.939 "params": { 00:18:58.939 "impl_name": "posix", 00:18:58.939 "recv_buf_size": 2097152, 00:18:58.939 "send_buf_size": 2097152, 00:18:58.939 "enable_recv_pipe": true, 00:18:58.939 "enable_quickack": false, 00:18:58.939 "enable_placement_id": 0, 00:18:58.939 "enable_zerocopy_send_server": true, 00:18:58.939 "enable_zerocopy_send_client": false, 00:18:58.939 "zerocopy_threshold": 0, 00:18:58.939 "tls_version": 0, 00:18:58.939 "enable_ktls": false 00:18:58.939 } 00:18:58.939 } 00:18:58.939 ] 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "subsystem": "vmd", 00:18:58.939 "config": [] 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "subsystem": "accel", 00:18:58.939 "config": [ 00:18:58.939 { 00:18:58.939 "method": "accel_set_options", 00:18:58.939 "params": { 00:18:58.939 "small_cache_size": 128, 00:18:58.939 "large_cache_size": 16, 00:18:58.939 "task_count": 2048, 00:18:58.939 "sequence_count": 2048, 00:18:58.939 "buf_count": 2048 00:18:58.939 } 00:18:58.939 } 00:18:58.939 ] 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "subsystem": "bdev", 00:18:58.939 "config": [ 00:18:58.939 { 00:18:58.939 "method": "bdev_set_options", 00:18:58.939 "params": { 00:18:58.939 "bdev_io_pool_size": 65535, 00:18:58.939 "bdev_io_cache_size": 256, 00:18:58.939 "bdev_auto_examine": true, 00:18:58.939 "iobuf_small_cache_size": 128, 00:18:58.939 "iobuf_large_cache_size": 16 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_raid_set_options", 00:18:58.939 "params": { 00:18:58.939 "process_window_size_kb": 1024, 00:18:58.939 "process_max_bandwidth_mb_sec": 0 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_iscsi_set_options", 00:18:58.939 "params": { 00:18:58.939 "timeout_sec": 30 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_nvme_set_options", 00:18:58.939 "params": { 00:18:58.939 "action_on_timeout": "none", 00:18:58.939 
"timeout_us": 0, 00:18:58.939 "timeout_admin_us": 0, 00:18:58.939 "keep_alive_timeout_ms": 10000, 00:18:58.939 "arbitration_burst": 0, 00:18:58.939 "low_priority_weight": 0, 00:18:58.939 "medium_priority_weight": 0, 00:18:58.939 "high_priority_weight": 0, 00:18:58.939 "nvme_adminq_poll_period_us": 10000, 00:18:58.939 "nvme_ioq_poll_period_us": 0, 00:18:58.939 "io_queue_requests": 512, 00:18:58.939 "delay_cmd_submit": true, 00:18:58.939 "transport_retry_count": 4, 00:18:58.939 "bdev_retry_count": 3, 00:18:58.939 "transport_ack_timeout": 0, 00:18:58.939 "ctrlr_loss_timeout_sec": 0, 00:18:58.939 "reconnect_delay_sec": 0, 00:18:58.939 "fast_io_fail_timeout_sec": 0, 00:18:58.939 "disable_auto_failback": false, 00:18:58.939 "generate_uuids": false, 00:18:58.939 "transport_tos": 0, 00:18:58.939 "nvme_error_stat": false, 00:18:58.939 "rdma_srq_size": 0, 00:18:58.939 "io_path_stat": false, 00:18:58.939 "allow_accel_sequence": false, 00:18:58.939 "rdma_max_cq_size": 0, 00:18:58.939 "rdma_cm_event_timeout_ms": 0, 00:18:58.939 "dhchap_digests": [ 00:18:58.939 "sha256", 00:18:58.939 "sha384", 00:18:58.939 "sha512" 00:18:58.939 ], 00:18:58.939 "dhchap_dhgroups": [ 00:18:58.939 "null", 00:18:58.939 "ffdhe2048", 00:18:58.939 "ffdhe3072", 00:18:58.939 "ffdhe4096", 00:18:58.939 "ffdhe6144", 00:18:58.939 "ffdhe8192" 00:18:58.939 ] 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_nvme_attach_controller", 00:18:58.939 "params": { 00:18:58.939 "name": "nvme0", 00:18:58.939 "trtype": "TCP", 00:18:58.939 "adrfam": "IPv4", 00:18:58.939 "traddr": "10.0.0.2", 00:18:58.939 "trsvcid": "4420", 00:18:58.939 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:58.939 "prchk_reftag": false, 00:18:58.939 "prchk_guard": false, 00:18:58.939 "ctrlr_loss_timeout_sec": 0, 00:18:58.939 "reconnect_delay_sec": 0, 00:18:58.939 "fast_io_fail_timeout_sec": 0, 00:18:58.939 "psk": "key0", 00:18:58.939 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:58.939 "hdgst": false, 00:18:58.939 "ddgst": 
false, 00:18:58.939 "multipath": "multipath" 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_nvme_set_hotplug", 00:18:58.939 "params": { 00:18:58.939 "period_us": 100000, 00:18:58.939 "enable": false 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_enable_histogram", 00:18:58.939 "params": { 00:18:58.939 "name": "nvme0n1", 00:18:58.939 "enable": true 00:18:58.939 } 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "method": "bdev_wait_for_examine" 00:18:58.939 } 00:18:58.939 ] 00:18:58.939 }, 00:18:58.939 { 00:18:58.939 "subsystem": "nbd", 00:18:58.939 "config": [] 00:18:58.939 } 00:18:58.939 ] 00:18:58.939 }' 00:18:58.939 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:58.939 [2024-12-09 04:09:27.328238] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:18:58.939 [2024-12-09 04:09:27.328343] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid257706 ] 00:18:58.939 [2024-12-09 04:09:27.394535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.939 [2024-12-09 04:09:27.452893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:59.198 [2024-12-09 04:09:27.630100] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.198 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:59.198 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:59.198 04:09:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:59.198 04:09:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:59.456 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.456 04:09:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.714 Running I/O for 1 seconds... 00:19:00.645 3247.00 IOPS, 12.68 MiB/s 00:19:00.645 Latency(us) 00:19:00.645 [2024-12-09T03:09:29.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.645 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:00.645 Verification LBA range: start 0x0 length 0x2000 00:19:00.645 nvme0n1 : 1.02 3305.60 12.91 0.00 0.00 38352.73 6553.60 80390.83 00:19:00.645 [2024-12-09T03:09:29.221Z] =================================================================================================================== 00:19:00.645 [2024-12-09T03:09:29.221Z] Total : 3305.60 12.91 0.00 0.00 38352.73 6553.60 80390.83 00:19:00.645 { 00:19:00.645 "results": [ 00:19:00.645 { 00:19:00.645 "job": "nvme0n1", 00:19:00.645 "core_mask": "0x2", 00:19:00.645 "workload": "verify", 00:19:00.645 "status": "finished", 00:19:00.645 "verify_range": { 00:19:00.645 "start": 0, 00:19:00.645 "length": 8192 00:19:00.645 }, 00:19:00.645 "queue_depth": 128, 00:19:00.645 "io_size": 4096, 00:19:00.645 "runtime": 1.020996, 00:19:00.645 "iops": 3305.595712421988, 00:19:00.645 "mibps": 12.912483251648391, 00:19:00.645 "io_failed": 0, 00:19:00.645 "io_timeout": 0, 00:19:00.645 "avg_latency_us": 38352.731598134436, 00:19:00.645 "min_latency_us": 6553.6, 00:19:00.645 "max_latency_us": 80390.82666666666 00:19:00.645 } 00:19:00.645 ], 00:19:00.645 "core_count": 1 00:19:00.645 } 00:19:00.645 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:00.645 04:09:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:00.646 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:00.646 nvmf_trace.0 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 257706 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257706 ']' 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257706 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 257706 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257706' 00:19:00.903 killing process with pid 257706 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257706 00:19:00.903 Received shutdown signal, test time was about 1.000000 seconds 00:19:00.903 00:19:00.903 Latency(us) 00:19:00.903 [2024-12-09T03:09:29.479Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.903 [2024-12-09T03:09:29.479Z] =================================================================================================================== 00:19:00.903 [2024-12-09T03:09:29.479Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.903 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257706 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.160 rmmod nvme_tcp 00:19:01.160 rmmod nvme_fabrics 00:19:01.160 rmmod nvme_keyring 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 257562 ']' 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 257562 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 257562 ']' 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 257562 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 257562 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 257562' 00:19:01.160 killing process with pid 257562 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 257562 00:19:01.160 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 257562 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@297 -- # iptr 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.419 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.420 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.420 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.420 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.420 04:09:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.321 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:03.321 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.ApVRvtgogO /tmp/tmp.lhfP9KGslT /tmp/tmp.fbnd8laNn2 00:19:03.321 00:19:03.321 real 1m22.393s 00:19:03.321 user 2m19.648s 00:19:03.321 sys 0m23.808s 00:19:03.321 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:03.321 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:03.321 ************************************ 00:19:03.321 END TEST nvmf_tls 00:19:03.321 ************************************ 00:19:03.579 04:09:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.580 ************************************ 00:19:03.580 START TEST nvmf_fips 00:19:03.580 ************************************ 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:03.580 * Looking for test storage... 00:19:03.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:03.580 04:09:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.580 
04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:03.580 04:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:03.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.580 --rc genhtml_branch_coverage=1 00:19:03.580 --rc genhtml_function_coverage=1 00:19:03.580 --rc genhtml_legend=1 00:19:03.580 --rc geninfo_all_blocks=1 00:19:03.580 --rc geninfo_unexecuted_blocks=1 00:19:03.580 00:19:03.580 ' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:03.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.580 --rc genhtml_branch_coverage=1 00:19:03.580 --rc genhtml_function_coverage=1 00:19:03.580 --rc genhtml_legend=1 00:19:03.580 --rc geninfo_all_blocks=1 00:19:03.580 --rc geninfo_unexecuted_blocks=1 00:19:03.580 00:19:03.580 ' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:03.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.580 --rc genhtml_branch_coverage=1 00:19:03.580 --rc genhtml_function_coverage=1 00:19:03.580 --rc genhtml_legend=1 00:19:03.580 --rc geninfo_all_blocks=1 00:19:03.580 --rc geninfo_unexecuted_blocks=1 00:19:03.580 00:19:03.580 ' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:03.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.580 --rc genhtml_branch_coverage=1 00:19:03.580 --rc genhtml_function_coverage=1 00:19:03.580 --rc genhtml_legend=1 00:19:03.580 --rc geninfo_all_blocks=1 00:19:03.580 --rc geninfo_unexecuted_blocks=1 00:19:03.580 00:19:03.580 ' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.580 04:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.580 04:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.580 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:03.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@337 -- # read -ra ver2 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] 
)) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:03.581 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:03.839 Error setting digest 00:19:03.839 400230F4557F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:03.839 400230F4557F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.839 04:09:32 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.839 04:09:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:06.368 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:06.368 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:06.368 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:06.369 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:06.369 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:06.369 04:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:06.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:19:06.369 00:19:06.369 --- 10.0.0.2 ping statistics --- 00:19:06.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.369 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:06.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:19:06.369 00:19:06.369 --- 10.0.0.1 ping statistics --- 00:19:06.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.369 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:06.369 04:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=259943 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 259943 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 259943 ']' 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.369 [2024-12-09 04:09:34.605577] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:19:06.369 [2024-12-09 04:09:34.605682] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.369 [2024-12-09 04:09:34.676158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.369 [2024-12-09 04:09:34.731768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.369 [2024-12-09 04:09:34.731828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.369 [2024-12-09 04:09:34.731852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.369 [2024-12-09 04:09:34.731870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.369 [2024-12-09 04:09:34.731880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
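The trace above (nvmf/common.sh, nvmf_tcp_init) builds a two-port loopback topology by moving the target-side NIC port into a network namespace, addressing both sides, opening TCP port 4420, and ping-verifying reachability. A minimal dry-run sketch of those steps — interface names and the namespace name are taken from this log (cvl_0_0 / cvl_0_1); in the real script they are derived from the detected PCI devices, and `emit` would be `sudo "$@"` on a live machine:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP test topology seen in the log.
# IF_TGT/IF_INI/NS mirror the names in this run; the real nvmf/common.sh
# derives them from the detected PCI net devices.
IF_TGT=${IF_TGT:-cvl_0_0}     # target-side port, moved into the namespace
IF_INI=${IF_INI:-cvl_0_1}     # initiator-side port, stays in the root namespace
NS=${NS:-${IF_TGT}_ns_spdk}

emit() { echo "$@"; }         # swap for 'sudo "$@"' to actually apply

emit ip netns add "$NS"
emit ip link set "$IF_TGT" netns "$NS"
emit ip addr add 10.0.0.1/24 dev "$IF_INI"
emit ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IF_TGT"
emit ip link set "$IF_INI" up
emit ip netns exec "$NS" ip link set "$IF_TGT" up
emit ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP listener port on the initiator-side interface,
# then verify the target address answers from the root namespace.
emit iptables -I INPUT 1 -i "$IF_INI" -p tcp --dport 4420 -j ACCEPT
emit ping -c 1 10.0.0.2
```

The target application is then launched inside the namespace (`ip netns exec "$NS" nvmf_tgt ...`), which is exactly the `NVMF_TARGET_NS_CMD` prefix visible in the trace.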
00:19:06.369 [2024-12-09 04:09:34.732430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.hqk 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.hqk 00:19:06.369 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.hqk 00:19:06.370 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.hqk 00:19:06.370 04:09:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:06.627 [2024-12-09 04:09:35.168728] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.627 [2024-12-09 04:09:35.184760] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.627 [2024-12-09 04:09:35.185014] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.889 malloc0 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=260093 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 260093 /var/tmp/bdevperf.sock 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 260093 ']' 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.889 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.889 [2024-12-09 04:09:35.322660] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
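In the fips.sh steps above, the pre-shared key is written to a temp file (`mktemp -t spdk-psk.XXX`, `echo -n`, `chmod 0600`) before being handed to the target and initiator. A small sketch of that provisioning, with a format pre-check; the regex follows the NVMe TLS PSK interchange format (`NVMeTLSkey-1:<hash>:<base64>:`) and its exact strictness is an assumption, not something the test script itself enforces:

```shell
#!/usr/bin/env bash
# Sketch: validate and persist a TLS PSK the way fips.sh does above.
# The interchange-format regex ("NVMeTLSkey-1:<2-digit hash id>:<base64>:")
# is an assumption; SPDK validates the key itself when it is registered.
write_psk() {
    local key=$1 path
    if [[ ! $key =~ ^NVMeTLSkey-1:[0-9]{2}:[A-Za-z0-9+/=]+:$ ]]; then
        echo "malformed PSK" >&2
        return 1
    fi
    path=$(mktemp -t spdk-psk.XXX)   # e.g. /tmp/spdk-psk.hqk in this run
    printf '%s' "$key" > "$path"     # no trailing newline, like 'echo -n'
    chmod 0600 "$path"               # keyring_file_add_key expects 0600
    echo "$path"
}
```

The resulting path is what the test later passes to `rpc.py keyring_file_add_key` and removes in cleanup (`rm -f /tmp/spdk-psk.hqk`).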
00:19:06.889 [2024-12-09 04:09:35.322744] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid260093 ] 00:19:06.889 [2024-12-09 04:09:35.405939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:07.151 [2024-12-09 04:09:35.478967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.151 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.151 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:07.151 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.hqk 00:19:07.408 04:09:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:07.664 [2024-12-09 04:09:36.159920] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.664 TLSTESTn1 00:19:07.921 04:09:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:07.921 Running I/O for 10 seconds... 
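The bdevperf side of the run above boils down to three RPC calls: register the key file with the bdevperf instance, attach a TLS-enabled controller referencing it, and kick off the I/O phase. A dry-run sketch with the paths and arguments taken from this log — `run` just echoes so the sketch executes without an SPDK build; against a live bdevperf it would be `"$@"`:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TLS bdevperf sequence traced above (fips.sh 145-156).
# RPC/SOCK/PSK_FILE values come from this log; 'run' echoes instead of executing.
RPC=${RPC:-scripts/rpc.py}
SOCK=${SOCK:-/var/tmp/bdevperf.sock}
PSK_FILE=${PSK_FILE:-/tmp/spdk-psk.hqk}

run() { echo "$@"; }   # replace with: "$@"

run "$RPC" -s "$SOCK" keyring_file_add_key key0 "$PSK_FILE"
run "$RPC" -s "$SOCK" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key0
run examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests
```

The attach creates the `TLSTESTn1` namespace bdev that the 10-second verify workload then exercises, producing the IOPS/latency table that follows.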
00:19:10.223 3296.00 IOPS, 12.88 MiB/s [2024-12-09T03:09:39.727Z] 3309.00 IOPS, 12.93 MiB/s [2024-12-09T03:09:40.656Z] 3312.33 IOPS, 12.94 MiB/s [2024-12-09T03:09:41.586Z] 3312.25 IOPS, 12.94 MiB/s [2024-12-09T03:09:42.519Z] 3314.60 IOPS, 12.95 MiB/s [2024-12-09T03:09:43.451Z] 3329.50 IOPS, 13.01 MiB/s [2024-12-09T03:09:44.383Z] 3319.29 IOPS, 12.97 MiB/s [2024-12-09T03:09:45.760Z] 3312.75 IOPS, 12.94 MiB/s [2024-12-09T03:09:46.692Z] 3324.78 IOPS, 12.99 MiB/s [2024-12-09T03:09:46.692Z] 3321.60 IOPS, 12.97 MiB/s 00:19:18.116 Latency(us) 00:19:18.116 [2024-12-09T03:09:46.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.116 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:18.116 Verification LBA range: start 0x0 length 0x2000 00:19:18.116 TLSTESTn1 : 10.03 3324.23 12.99 0.00 0.00 38429.65 10485.76 33981.63 00:19:18.116 [2024-12-09T03:09:46.692Z] =================================================================================================================== 00:19:18.116 [2024-12-09T03:09:46.692Z] Total : 3324.23 12.99 0.00 0.00 38429.65 10485.76 33981.63 00:19:18.116 { 00:19:18.116 "results": [ 00:19:18.116 { 00:19:18.116 "job": "TLSTESTn1", 00:19:18.116 "core_mask": "0x4", 00:19:18.116 "workload": "verify", 00:19:18.116 "status": "finished", 00:19:18.116 "verify_range": { 00:19:18.116 "start": 0, 00:19:18.116 "length": 8192 00:19:18.116 }, 00:19:18.116 "queue_depth": 128, 00:19:18.116 "io_size": 4096, 00:19:18.116 "runtime": 10.029991, 00:19:18.116 "iops": 3324.230300904557, 00:19:18.116 "mibps": 12.985274612908427, 00:19:18.116 "io_failed": 0, 00:19:18.116 "io_timeout": 0, 00:19:18.116 "avg_latency_us": 38429.65247311255, 00:19:18.116 "min_latency_us": 10485.76, 00:19:18.116 "max_latency_us": 33981.62962962963 00:19:18.116 } 00:19:18.116 ], 00:19:18.116 "core_count": 1 00:19:18.116 } 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:18.116 04:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:18.116 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:18.117 nvmf_trace.0 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 260093 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 260093 ']' 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 260093 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 260093 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 260093' 00:19:18.117 killing process with pid 260093 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 260093 00:19:18.117 Received shutdown signal, test time was about 10.000000 seconds 00:19:18.117 00:19:18.117 Latency(us) 00:19:18.117 [2024-12-09T03:09:46.693Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:18.117 [2024-12-09T03:09:46.693Z] =================================================================================================================== 00:19:18.117 [2024-12-09T03:09:46.693Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:18.117 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 260093 00:19:18.374 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:18.374 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:18.374 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:18.375 rmmod nvme_tcp 00:19:18.375 rmmod nvme_fabrics 00:19:18.375 rmmod nvme_keyring 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:18.375 04:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 259943 ']' 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 259943 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 259943 ']' 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 259943 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 259943 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 259943' 00:19:18.375 killing process with pid 259943 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 259943 00:19:18.375 04:09:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 259943 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:18.632 04:09:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.hqk 00:19:21.164 00:19:21.164 real 0m17.228s 00:19:21.164 user 0m23.332s 00:19:21.164 sys 0m5.102s 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:21.164 ************************************ 00:19:21.164 END TEST nvmf_fips 00:19:21.164 ************************************ 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:21.164 ************************************ 00:19:21.164 START TEST nvmf_control_msg_list 00:19:21.164 ************************************ 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:21.164 * Looking for test storage... 00:19:21.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:21.164 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:21.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.165 --rc genhtml_branch_coverage=1 00:19:21.165 --rc genhtml_function_coverage=1 00:19:21.165 --rc genhtml_legend=1 00:19:21.165 --rc geninfo_all_blocks=1 00:19:21.165 --rc geninfo_unexecuted_blocks=1 00:19:21.165 00:19:21.165 ' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:21.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.165 --rc genhtml_branch_coverage=1 00:19:21.165 --rc genhtml_function_coverage=1 00:19:21.165 --rc genhtml_legend=1 00:19:21.165 --rc geninfo_all_blocks=1 00:19:21.165 --rc geninfo_unexecuted_blocks=1 00:19:21.165 00:19:21.165 ' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:21.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.165 --rc genhtml_branch_coverage=1 00:19:21.165 --rc genhtml_function_coverage=1 00:19:21.165 --rc genhtml_legend=1 00:19:21.165 --rc geninfo_all_blocks=1 00:19:21.165 --rc geninfo_unexecuted_blocks=1 00:19:21.165 00:19:21.165 ' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:21.165 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:21.165 --rc genhtml_branch_coverage=1 00:19:21.165 --rc genhtml_function_coverage=1 00:19:21.165 --rc genhtml_legend=1 00:19:21.165 --rc geninfo_all_blocks=1 00:19:21.165 --rc geninfo_unexecuted_blocks=1 00:19:21.165 00:19:21.165 ' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.165 04:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.165 04:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:21.165 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:21.165 04:09:49 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:19:21.165 04:09:49 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:19:23.280 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.280 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.280 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:23.280 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.280 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:23.280 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.280 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.280 04:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:23.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:23.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:23.280 00:19:23.280 --- 10.0.0.2 ping statistics --- 00:19:23.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.280 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:23.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:23.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:19:23.280 00:19:23.280 --- 10.0.0.1 ping statistics --- 00:19:23.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:23.280 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=263365 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 263365 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 263365 ']' 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.280 [2024-12-09 04:09:51.552990] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:19:23.280 [2024-12-09 04:09:51.553088] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.280 [2024-12-09 04:09:51.623821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.280 [2024-12-09 04:09:51.676341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.280 [2024-12-09 04:09:51.676416] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.280 [2024-12-09 04:09:51.676438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.280 [2024-12-09 04:09:51.676448] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.280 [2024-12-09 04:09:51.676458] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.280 [2024-12-09 04:09:51.677039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.280 [2024-12-09 04:09:51.816471] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:23.280 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.281 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.281 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.281 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:23.281 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.281 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.591 Malloc0 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:23.591 [2024-12-09 04:09:51.856982] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=263387 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=263388 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=263389 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 263387 00:19:23.591 04:09:51 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:23.591 [2024-12-09 04:09:51.925464] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:23.591 [2024-12-09 04:09:51.935508] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:23.591 [2024-12-09 04:09:51.935718] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:24.718 Initializing NVMe Controllers 00:19:24.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:24.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:24.718 Initialization complete. Launching workers. 00:19:24.718 ======================================================== 00:19:24.718 Latency(us) 00:19:24.718 Device Information : IOPS MiB/s Average min max 00:19:24.718 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40885.33 40570.48 40964.76 00:19:24.718 ======================================================== 00:19:24.718 Total : 25.00 0.10 40885.33 40570.48 40964.76 00:19:24.718 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 263388 00:19:24.718 Initializing NVMe Controllers 00:19:24.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:24.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:24.718 Initialization complete. Launching workers. 
00:19:24.718 ======================================================== 00:19:24.718 Latency(us) 00:19:24.718 Device Information : IOPS MiB/s Average min max 00:19:24.718 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6306.99 24.64 158.12 150.93 523.82 00:19:24.718 ======================================================== 00:19:24.718 Total : 6306.99 24.64 158.12 150.93 523.82 00:19:24.718 00:19:24.718 Initializing NVMe Controllers 00:19:24.718 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:24.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:24.718 Initialization complete. Launching workers. 00:19:24.718 ======================================================== 00:19:24.718 Latency(us) 00:19:24.718 Device Information : IOPS MiB/s Average min max 00:19:24.718 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40901.39 40836.96 40966.92 00:19:24.718 ======================================================== 00:19:24.718 Total : 25.00 0.10 40901.39 40836.96 40966.92 00:19:24.718 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 263389 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:19:24.718 04:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:24.718 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:24.718 rmmod nvme_tcp 00:19:24.976 rmmod nvme_fabrics 00:19:24.976 rmmod nvme_keyring 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 263365 ']' 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 263365 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 263365 ']' 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 263365 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263365 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263365' 00:19:24.976 killing process with pid 263365 00:19:24.976 04:09:53 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 263365 00:19:24.976 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 263365 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:25.235 04:09:53 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:27.135 00:19:27.135 real 0m6.471s 00:19:27.135 user 0m6.234s 00:19:27.135 sys 0m2.521s 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:27.135 ************************************ 00:19:27.135 END TEST nvmf_control_msg_list 00:19:27.135 ************************************ 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.135 04:09:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.393 ************************************ 00:19:27.393 START TEST nvmf_wait_for_buf 00:19:27.393 ************************************ 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:27.393 * Looking for test storage... 
00:19:27.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:27.393 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:19:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.394 --rc genhtml_branch_coverage=1 00:19:27.394 --rc genhtml_function_coverage=1 00:19:27.394 --rc genhtml_legend=1 00:19:27.394 --rc geninfo_all_blocks=1 00:19:27.394 --rc geninfo_unexecuted_blocks=1 00:19:27.394 00:19:27.394 ' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.394 --rc genhtml_branch_coverage=1 00:19:27.394 --rc genhtml_function_coverage=1 00:19:27.394 --rc genhtml_legend=1 00:19:27.394 --rc geninfo_all_blocks=1 00:19:27.394 --rc geninfo_unexecuted_blocks=1 00:19:27.394 00:19:27.394 ' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.394 --rc genhtml_branch_coverage=1 00:19:27.394 --rc genhtml_function_coverage=1 00:19:27.394 --rc genhtml_legend=1 00:19:27.394 --rc geninfo_all_blocks=1 00:19:27.394 --rc geninfo_unexecuted_blocks=1 00:19:27.394 00:19:27.394 ' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:27.394 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.394 --rc genhtml_branch_coverage=1 00:19:27.394 --rc genhtml_function_coverage=1 00:19:27.394 --rc genhtml_legend=1 00:19:27.394 --rc geninfo_all_blocks=1 00:19:27.394 --rc geninfo_unexecuted_blocks=1 00:19:27.394 00:19:27.394 ' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:19:27.394 04:09:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:29.922 04:09:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:29.922 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:29.922 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:29.922 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:29.922 04:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:29.922 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:29.922 04:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:29.922 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:29.923 04:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:29.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:29.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:19:29.923 00:19:29.923 --- 10.0.0.2 ping statistics --- 00:19:29.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.923 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:29.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:29.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:19:29.923 00:19:29.923 --- 10.0.0.1 ping statistics --- 00:19:29.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:29.923 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=265598 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
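The nvmf_tcp_init sequence traced above (nvmf/common.sh @250-@291) wires the two E810 ports back-to-back through a network namespace: one port stays in the host namespace as the initiator, the other is moved into cvl_0_0_ns_spdk as the target. A minimal sketch of that topology, with interface names and addresses taken from this run; the commands are only recorded via a run() wrapper here, since the real script executes them directly as root:

```shell
# Sketch of the namespace topology from this log (abridged: the script also
# flushes addresses first and brings up lo inside the namespace).
setup_cmds=()
run() { setup_cmds+=("$*"); echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                        # target side gets its own netns
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator IP on the host side
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP in the namespace
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP port
run ping -c 1 10.0.0.2                                  # verify target reachable from host
```

The log's two ping checks (host to 10.0.0.2, and 10.0.0.1 from inside the namespace) confirm the split topology before nvmf_tgt is started under `ip netns exec`.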
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 265598 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 265598 ']' 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:29.923 [2024-12-09 04:09:58.253687] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:19:29.923 [2024-12-09 04:09:58.253759] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.923 [2024-12-09 04:09:58.325542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.923 [2024-12-09 04:09:58.382780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.923 [2024-12-09 04:09:58.382852] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:29.923 [2024-12-09 04:09:58.382866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.923 [2024-12-09 04:09:58.382877] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.923 [2024-12-09 04:09:58.382887] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.923 [2024-12-09 04:09:58.383507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.923 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 
04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 Malloc0 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:19:30.181 [2024-12-09 04:09:58.626151] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:30.181 [2024-12-09 04:09:58.650429] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
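Put together, the rpc_cmd calls traced in wait_for_buf.sh (@19-@26) amount to the following target bring-up: shrink the small iobuf pool, finish init, create a malloc bdev, and expose it over TCP. A hedged reconstruction using SPDK's rpc.py method names as they appear in this trace; the calls are collected rather than sent, since the real ones assume a live nvmf_tgt started with --wait-for-rpc:

```shell
# RPC sequence reconstructed from this log; values (154 buffers, 24 shared
# transport buffers) are the ones that make the buffer-wait path trigger.
rpc_calls=()
rpc() { rpc_calls+=("$*"); echo "rpc.py $*"; }

rpc accel_set_options --small-cache-size 0 --large-cache-size 0
rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # tiny pool: forces buffer waits
rpc framework_start_init                                            # leave --wait-for-rpc mode
rpc bdev_malloc_create -b Malloc0 32 512                            # 32 MiB bdev, 512 B blocks
rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

With that listener up, the spdk_nvme_perf run below (4 queued 128 KiB random reads) is what exhausts the 154-buffer small pool.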
00:19:30.181 04:09:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:30.181 [2024-12-09 04:09:58.735408] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:32.079 Initializing NVMe Controllers 00:19:32.079 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:32.079 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:32.079 Initialization complete. Launching workers. 00:19:32.079 ======================================================== 00:19:32.079 Latency(us) 00:19:32.079 Device Information : IOPS MiB/s Average min max 00:19:32.079 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 38.86 4.86 107223.13 31912.62 191529.35 00:19:32.079 ======================================================== 00:19:32.079 Total : 38.86 4.86 107223.13 31912.62 191529.35 00:19:32.079 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.079 04:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=598 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 598 -eq 0 ]] 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:19:32.079 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:32.080 rmmod nvme_tcp 00:19:32.080 rmmod nvme_fabrics 00:19:32.080 rmmod nvme_keyring 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 265598 ']' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 265598 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 265598 ']' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 265598 
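The pass/fail check traced above (wait_for_buf.sh @32-@33) is the iobuf_get_stats query: with only 154 small buffers and 128 KiB reads in flight, the nvmf_TCP module must have retried buffer allocations, 598 times in this run, and the test fails if the retry count is 0. A sketch of that check against a canned stats fragment; a real run pipes `rpc.py iobuf_get_stats` into jq as the script does:

```shell
# Canned stats fragment carrying the retry count observed in this run (598);
# the value is extracted with parameter expansion instead of jq for portability.
stats='{"module":"nvmf_TCP","small_pool":{"retry":598}}'
retry_count=${stats##*\"retry\":}   # drop everything up to the retry value
retry_count=${retry_count%%\}*}     # drop the trailing braces
if [ "$retry_count" -eq 0 ]; then
  echo "FAIL: small iobuf pool was never exhausted"
else
  echo "PASS: observed $retry_count small-buffer retries"
fi
```

A nonzero retry count is the whole point of the test: it proves requests waited for buffers and completed anyway, which also explains the high average latency (about 107 ms) in the perf table above.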
00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 265598 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 265598' 00:19:32.080 killing process with pid 265598 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 265598 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 265598 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:32.080 04:10:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:32.080 04:10:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:34.634 00:19:34.634 real 0m6.955s 00:19:34.634 user 0m3.289s 00:19:34.634 sys 0m2.088s 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:34.634 ************************************ 00:19:34.634 END TEST nvmf_wait_for_buf 00:19:34.634 ************************************ 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:19:34.634 04:10:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:36.539 
04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.539 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:36.539 04:10:04 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.539 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.539 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.539 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:36.539 04:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.540 ************************************ 00:19:36.540 START TEST nvmf_perf_adq 00:19:36.540 ************************************ 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:36.540 * Looking for test storage... 00:19:36.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:36.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.540 --rc genhtml_branch_coverage=1 00:19:36.540 --rc genhtml_function_coverage=1 00:19:36.540 --rc genhtml_legend=1 00:19:36.540 --rc geninfo_all_blocks=1 00:19:36.540 --rc geninfo_unexecuted_blocks=1 00:19:36.540 00:19:36.540 ' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:36.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.540 --rc genhtml_branch_coverage=1 00:19:36.540 --rc genhtml_function_coverage=1 00:19:36.540 --rc genhtml_legend=1 00:19:36.540 --rc geninfo_all_blocks=1 00:19:36.540 --rc geninfo_unexecuted_blocks=1 00:19:36.540 00:19:36.540 ' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:36.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.540 --rc genhtml_branch_coverage=1 00:19:36.540 --rc genhtml_function_coverage=1 00:19:36.540 --rc genhtml_legend=1 00:19:36.540 --rc geninfo_all_blocks=1 00:19:36.540 --rc geninfo_unexecuted_blocks=1 00:19:36.540 00:19:36.540 ' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:36.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:36.540 --rc genhtml_branch_coverage=1 00:19:36.540 --rc genhtml_function_coverage=1 00:19:36.540 --rc genhtml_legend=1 00:19:36.540 --rc geninfo_all_blocks=1 00:19:36.540 --rc geninfo_unexecuted_blocks=1 00:19:36.540 00:19:36.540 ' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.540 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.541 04:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:36.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:36.541 04:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:38.442 04:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:38.442 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:38.442 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:38.442 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:38.442 04:10:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:38.442 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:38.442 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
00:19:38.443 04:10:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:39.377 04:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:43.567 04:10:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:47.778 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:47.779 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:47.779 04:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:47.779 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:47.779 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:47.779 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:47.779 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:48.038 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:48.038 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.406 ms 00:19:48.038 00:19:48.038 --- 10.0.0.2 ping statistics --- 00:19:48.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.038 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:48.038 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:48.038 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:19:48.038 00:19:48.038 --- 10.0.0.1 ping statistics --- 00:19:48.038 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:48.038 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=270568 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 270568 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 270568 ']' 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.038 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.038 [2024-12-09 04:10:16.486551] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:19:48.038 [2024-12-09 04:10:16.486654] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.038 [2024-12-09 04:10:16.559601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:48.295 [2024-12-09 04:10:16.619674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.295 [2024-12-09 04:10:16.619725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.295 [2024-12-09 04:10:16.619748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.295 [2024-12-09 04:10:16.619760] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.295 [2024-12-09 04:10:16.619770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:48.295 [2024-12-09 04:10:16.621361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.295 [2024-12-09 04:10:16.621387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:48.295 [2024-12-09 04:10:16.621411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:48.295 [2024-12-09 04:10:16.621414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:48.295 04:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.295 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 [2024-12-09 04:10:16.882769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 Malloc1 00:19:48.568 04:10:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:48.568 [2024-12-09 04:10:16.941008] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=270599 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:48.568 04:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:50.463 "tick_rate": 2700000000, 00:19:50.463 "poll_groups": [ 00:19:50.463 { 00:19:50.463 "name": "nvmf_tgt_poll_group_000", 00:19:50.463 "admin_qpairs": 1, 00:19:50.463 "io_qpairs": 1, 00:19:50.463 "current_admin_qpairs": 1, 00:19:50.463 "current_io_qpairs": 1, 00:19:50.463 "pending_bdev_io": 0, 00:19:50.463 "completed_nvme_io": 18468, 00:19:50.463 "transports": [ 00:19:50.463 { 00:19:50.463 "trtype": "TCP" 00:19:50.463 } 00:19:50.463 ] 00:19:50.463 }, 00:19:50.463 { 00:19:50.463 "name": "nvmf_tgt_poll_group_001", 00:19:50.463 "admin_qpairs": 0, 00:19:50.463 "io_qpairs": 1, 00:19:50.463 "current_admin_qpairs": 0, 00:19:50.463 "current_io_qpairs": 1, 00:19:50.463 "pending_bdev_io": 0, 00:19:50.463 "completed_nvme_io": 19912, 00:19:50.463 "transports": [ 00:19:50.463 { 00:19:50.463 "trtype": "TCP" 00:19:50.463 } 00:19:50.463 ] 00:19:50.463 }, 00:19:50.463 { 00:19:50.463 "name": "nvmf_tgt_poll_group_002", 00:19:50.463 "admin_qpairs": 0, 00:19:50.463 "io_qpairs": 1, 00:19:50.463 "current_admin_qpairs": 0, 00:19:50.463 "current_io_qpairs": 1, 00:19:50.463 "pending_bdev_io": 0, 00:19:50.463 "completed_nvme_io": 
19982, 00:19:50.463 "transports": [ 00:19:50.463 { 00:19:50.463 "trtype": "TCP" 00:19:50.463 } 00:19:50.463 ] 00:19:50.463 }, 00:19:50.463 { 00:19:50.463 "name": "nvmf_tgt_poll_group_003", 00:19:50.463 "admin_qpairs": 0, 00:19:50.463 "io_qpairs": 1, 00:19:50.463 "current_admin_qpairs": 0, 00:19:50.463 "current_io_qpairs": 1, 00:19:50.463 "pending_bdev_io": 0, 00:19:50.463 "completed_nvme_io": 19652, 00:19:50.463 "transports": [ 00:19:50.463 { 00:19:50.463 "trtype": "TCP" 00:19:50.463 } 00:19:50.463 ] 00:19:50.463 } 00:19:50.463 ] 00:19:50.463 }' 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:50.463 04:10:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:50.463 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:50.463 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:50.463 04:10:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 270599 00:19:58.571 Initializing NVMe Controllers 00:19:58.571 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:58.571 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:58.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:58.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:58.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:58.572 Initialization complete. Launching workers. 
00:19:58.572 ======================================================== 00:19:58.572 Latency(us) 00:19:58.572 Device Information : IOPS MiB/s Average min max 00:19:58.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10368.30 40.50 6172.58 1972.76 10503.29 00:19:58.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10514.70 41.07 6088.28 2453.35 10583.66 00:19:58.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10552.20 41.22 6065.59 2391.77 9592.99 00:19:58.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 9713.00 37.94 6589.16 2639.28 11347.30 00:19:58.572 ======================================================== 00:19:58.572 Total : 41148.18 160.74 6221.94 1972.76 11347.30 00:19:58.572 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:58.572 rmmod nvme_tcp 00:19:58.572 rmmod nvme_fabrics 00:19:58.572 rmmod nvme_keyring 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:19:58.572 04:10:27 
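The `nvmf_get_stats` output earlier in this run is piped through `jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' | wc -l` to confirm that all four target poll groups are each driving exactly one active I/O queue pair before the perf run is accepted. A minimal Python sketch of the same check, using an abbreviated copy of the JSON shown in the log (field names are taken from the rpc output above; the trimmed fields are omitted only for brevity):

```python
import json

# Abbreviated nvmf_get_stats output, copied from the log above.
stats_json = """
{
  "tick_rate": 2700000000,
  "poll_groups": [
    {"name": "nvmf_tgt_poll_group_000", "admin_qpairs": 1,
     "current_io_qpairs": 1, "completed_nvme_io": 18468},
    {"name": "nvmf_tgt_poll_group_001", "admin_qpairs": 0,
     "current_io_qpairs": 1, "completed_nvme_io": 19912},
    {"name": "nvmf_tgt_poll_group_002", "admin_qpairs": 0,
     "current_io_qpairs": 1, "completed_nvme_io": 19982},
    {"name": "nvmf_tgt_poll_group_003", "admin_qpairs": 0,
     "current_io_qpairs": 1, "completed_nvme_io": 19652}
  ]
}
"""

stats = json.loads(stats_json)

# Same gate as the jq/wc pipeline in the test script: count poll groups
# with exactly one active I/O qpair; the script fails if count != 4.
count = sum(1 for g in stats["poll_groups"] if g["current_io_qpairs"] == 1)
print(count)  # 4
```

This mirrors the `[[ 4 -ne 4 ]]` guard in `perf_adq.sh@87`: with a 0xF0 initiator core mask and four target cores, an even one-qpair-per-poll-group spread is what the ADQ test expects.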
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 270568 ']' 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 270568 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 270568 ']' 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 270568 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:58.572 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 270568 00:19:58.829 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:58.829 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:58.829 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 270568' 00:19:58.829 killing process with pid 270568 00:19:58.829 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 270568 00:19:58.829 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 270568 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:59.087 
04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:59.087 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:59.088 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.088 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.088 04:10:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.989 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:00.989 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:00.989 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:00.989 04:10:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:01.925 04:10:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:04.455 04:10:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:09.728 04:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.728 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:09.729 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:09.729 
Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:09.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:09.729 04:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:09.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:09.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:20:09.729 00:20:09.729 --- 10.0.0.2 ping statistics --- 00:20:09.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.729 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:09.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:20:09.729 00:20:09.729 --- 10.0.0.1 ping statistics --- 00:20:09.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.729 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:09.729 net.core.busy_poll = 1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:09.729 net.core.busy_read = 1 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:20:09.729 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=273228 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 
273228 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 273228 ']' 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.730 04:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.730 [2024-12-09 04:10:37.901038] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:09.730 [2024-12-09 04:10:37.901119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.730 [2024-12-09 04:10:37.976837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:09.730 [2024-12-09 04:10:38.033831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.730 [2024-12-09 04:10:38.033887] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.730 [2024-12-09 04:10:38.033910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.730 [2024-12-09 04:10:38.033921] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:09.730 [2024-12-09 04:10:38.033931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.730 [2024-12-09 04:10:38.035358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.730 [2024-12-09 04:10:38.035423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:09.730 [2024-12-09 04:10:38.035485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:09.730 [2024-12-09 04:10:38.035489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.730 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.988 [2024-12-09 04:10:38.311085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:09.988 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.988 04:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.989 Malloc1 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:09.989 [2024-12-09 04:10:38.377474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=273372 
00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:09.989 04:10:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:11.889 "tick_rate": 2700000000, 00:20:11.889 "poll_groups": [ 00:20:11.889 { 00:20:11.889 "name": "nvmf_tgt_poll_group_000", 00:20:11.889 "admin_qpairs": 1, 00:20:11.889 "io_qpairs": 1, 00:20:11.889 "current_admin_qpairs": 1, 00:20:11.889 "current_io_qpairs": 1, 00:20:11.889 "pending_bdev_io": 0, 00:20:11.889 "completed_nvme_io": 25025, 00:20:11.889 "transports": [ 00:20:11.889 { 00:20:11.889 "trtype": "TCP" 00:20:11.889 } 00:20:11.889 ] 00:20:11.889 }, 00:20:11.889 { 00:20:11.889 "name": "nvmf_tgt_poll_group_001", 00:20:11.889 "admin_qpairs": 0, 00:20:11.889 "io_qpairs": 3, 00:20:11.889 "current_admin_qpairs": 0, 00:20:11.889 "current_io_qpairs": 3, 00:20:11.889 "pending_bdev_io": 0, 00:20:11.889 "completed_nvme_io": 24837, 00:20:11.889 "transports": [ 00:20:11.889 { 00:20:11.889 "trtype": "TCP" 00:20:11.889 } 00:20:11.889 ] 00:20:11.889 }, 00:20:11.889 { 00:20:11.889 "name": "nvmf_tgt_poll_group_002", 00:20:11.889 "admin_qpairs": 0, 00:20:11.889 "io_qpairs": 0, 00:20:11.889 "current_admin_qpairs": 0, 
00:20:11.889 "current_io_qpairs": 0, 00:20:11.889 "pending_bdev_io": 0, 00:20:11.889 "completed_nvme_io": 0, 00:20:11.889 "transports": [ 00:20:11.889 { 00:20:11.889 "trtype": "TCP" 00:20:11.889 } 00:20:11.889 ] 00:20:11.889 }, 00:20:11.889 { 00:20:11.889 "name": "nvmf_tgt_poll_group_003", 00:20:11.889 "admin_qpairs": 0, 00:20:11.889 "io_qpairs": 0, 00:20:11.889 "current_admin_qpairs": 0, 00:20:11.889 "current_io_qpairs": 0, 00:20:11.889 "pending_bdev_io": 0, 00:20:11.889 "completed_nvme_io": 0, 00:20:11.889 "transports": [ 00:20:11.889 { 00:20:11.889 "trtype": "TCP" 00:20:11.889 } 00:20:11.889 ] 00:20:11.889 } 00:20:11.889 ] 00:20:11.889 }' 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:11.889 04:10:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 273372 00:20:19.999 Initializing NVMe Controllers 00:20:19.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:19.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:19.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:19.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:19.999 Initialization complete. Launching workers. 
00:20:19.999 ======================================================== 00:20:19.999 Latency(us) 00:20:19.999 Device Information : IOPS MiB/s Average min max 00:20:19.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4294.90 16.78 14915.66 1706.64 61750.68 00:20:19.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4388.80 17.14 14596.14 2585.73 62153.85 00:20:19.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4590.00 17.93 13955.61 1687.12 61510.76 00:20:19.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13405.00 52.36 4774.37 2920.53 45613.32 00:20:19.999 ======================================================== 00:20:19.999 Total : 26678.69 104.21 9602.32 1687.12 62153.85 00:20:19.999 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:19.999 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:19.999 rmmod nvme_tcp 00:20:19.999 rmmod nvme_fabrics 00:20:19.999 rmmod nvme_keyring 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:20:20.256 04:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 273228 ']' 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 273228 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 273228 ']' 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 273228 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273228 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273228' 00:20:20.256 killing process with pid 273228 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 273228 00:20:20.256 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 273228 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:20:20.515 04:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:20.515 04:10:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:23.809 00:20:23.809 real 0m47.189s 00:20:23.809 user 2m39.663s 00:20:23.809 sys 0m10.695s 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:23.809 ************************************ 00:20:23.809 END TEST nvmf_perf_adq 00:20:23.809 ************************************ 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.809 ************************************ 00:20:23.809 START TEST nvmf_shutdown 00:20:23.809 ************************************ 00:20:23.809 04:10:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:23.809 * Looking for test storage... 00:20:23.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:23.809 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:23.810 04:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:23.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.810 --rc genhtml_branch_coverage=1 00:20:23.810 --rc genhtml_function_coverage=1 00:20:23.810 --rc genhtml_legend=1 00:20:23.810 --rc geninfo_all_blocks=1 00:20:23.810 --rc geninfo_unexecuted_blocks=1 00:20:23.810 00:20:23.810 ' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:23.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.810 --rc genhtml_branch_coverage=1 00:20:23.810 --rc genhtml_function_coverage=1 00:20:23.810 --rc genhtml_legend=1 00:20:23.810 --rc geninfo_all_blocks=1 00:20:23.810 --rc geninfo_unexecuted_blocks=1 00:20:23.810 00:20:23.810 ' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:23.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.810 --rc genhtml_branch_coverage=1 00:20:23.810 --rc genhtml_function_coverage=1 00:20:23.810 --rc genhtml_legend=1 00:20:23.810 --rc geninfo_all_blocks=1 00:20:23.810 --rc geninfo_unexecuted_blocks=1 00:20:23.810 00:20:23.810 ' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:23.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:23.810 --rc genhtml_branch_coverage=1 00:20:23.810 --rc genhtml_function_coverage=1 00:20:23.810 --rc genhtml_legend=1 00:20:23.810 --rc geninfo_all_blocks=1 00:20:23.810 --rc geninfo_unexecuted_blocks=1 00:20:23.810 00:20:23.810 ' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:23.810 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
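The warning logged above (`common.sh: line 33: [: : integer expression expected`) comes from `'[' '' -eq 1 ']'`: an unset or empty variable is being compared numerically, which POSIX `test` rejects. A minimal sketch of the failure mode and the usual `${var:-0}` guard (the flag name here is purely illustrative, not SPDK's actual variable):

```shell
# check_flag prints "interrupt" when the (hypothetical) flag equals 1,
# "polling" otherwise. ${1:-0} substitutes 0 for an unset OR empty value,
# so the numeric comparison can never hit the "[: : integer expression
# expected" error seen in the log.
check_flag() {
  if [ "${1:-0}" -eq 1 ]; then
    echo "interrupt"
  else
    echo "polling"
  fi
}

check_flag ""   # empty value: falls back to 0, no error
check_flag 1
```

Without the `:-0` default, `[ "" -eq 1 ]` prints the same diagnostic as the log and returns a nonzero status, which is harmless here only because the script treats the failed test as "flag not set".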
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:23.810 ************************************ 00:20:23.810 START TEST nvmf_shutdown_tc1 00:20:23.810 ************************************ 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:23.810 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:23.811 04:10:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:25.712 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:25.712 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:25.712 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:25.712 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:20:25.713 04:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:25.713 04:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:25.713 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.713 04:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:25.713 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:25.713 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:25.713 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:25.713 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:25.714 04:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:25.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:25.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:20:25.714 00:20:25.714 --- 10.0.0.2 ping statistics --- 00:20:25.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.714 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:25.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:25.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:20:25.714 00:20:25.714 --- 10.0.0.1 ping statistics --- 00:20:25.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:25.714 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
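The `nvmf_tcp_init` steps traced above move one port of the NIC (`cvl_0_0`) into a network namespace as the target side (10.0.0.2), keep the other port (`cvl_0_1`) in the root namespace as the initiator (10.0.0.1), open TCP port 4420 in iptables, and verify reachability with ping. A sketch that emits that command sequence rather than executing it (so it can be inspected without root); interface and namespace names are the ones shown in the log:

```shell
# Print the namespace topology commands equivalent to the nvmf_tcp_init
# trace: target interface goes into $ns with 10.0.0.2, initiator stays
# in the root namespace with 10.0.0.1, and the NVMe/TCP port is opened.
netns_setup_cmds() {
  ns=$1 tgt_if=$2 ini_if=$3
  cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}

netns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Running the target inside the namespace is why the log later prefixes `nvmf_tgt` with `ip netns exec cvl_0_0_ns_spdk`: both ends of the TCP connection live on one host but traverse the physical NIC ports.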
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=276686 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 276686 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 276686 ']' 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:25.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:25.714 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:25.971 [2024-12-09 04:10:54.334976] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:25.971 [2024-12-09 04:10:54.335051] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:25.971 [2024-12-09 04:10:54.407444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:25.971 [2024-12-09 04:10:54.463112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:25.971 [2024-12-09 04:10:54.463173] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:25.971 [2024-12-09 04:10:54.463193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:25.971 [2024-12-09 04:10:54.463204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:25.971 [2024-12-09 04:10:54.463213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:25.971 [2024-12-09 04:10:54.464796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:25.971 [2024-12-09 04:10:54.464861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:25.971 [2024-12-09 04:10:54.464976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:25.971 [2024-12-09 04:10:54.464980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.229 [2024-12-09 04:10:54.614608] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.229 04:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.229 04:10:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.229 Malloc1 00:20:26.229 [2024-12-09 04:10:54.718532] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:26.229 Malloc2 00:20:26.229 Malloc3 00:20:26.487 Malloc4 00:20:26.487 Malloc5 00:20:26.487 Malloc6 00:20:26.487 Malloc7 00:20:26.487 Malloc8 00:20:26.744 Malloc9 
00:20:26.744 Malloc10 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=276861 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 276861 /var/tmp/bdevperf.sock 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 276861 ']' 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
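Above, `bdev_svc` is launched with `--json /dev/fd/63`, fed by `gen_nvmf_target_json 1 2 3 ... 10`, which builds one `bdev_nvme_attach_controller` entry per subsystem from heredoc templates. A simplified stand-in for that generator (field names mirror the heredoc visible in the trace below; this is not SPDK's actual helper, and it omits the outer `subsystems` wrapper and digest options for brevity):

```shell
# Emit a JSON array with one attach-controller config per subsystem id,
# parameterized by target IP and port, mimicking the per-$subsystem
# heredoc expansion seen in the log.
gen_target_json() {
  ip=$1 port=$2; shift 2
  sep=""
  printf '['
  for i in "$@"; do
    printf '%s{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"%s","adrfam":"ipv4","trsvcid":"%s","subnqn":"nqn.2016-06.io.spdk:cnode%s"},"method":"bdev_nvme_attach_controller"}' \
      "$sep" "$i" "$ip" "$port" "$i"
    sep=","
  done
  printf ']\n'
}

gen_target_json 10.0.0.2 4420 1 2 3
```

Expanding the template once per subsystem is what produces the repeated `config+=("$(cat <<-EOF ...)")` / `cat` pairs that follow in the trace.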
00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": ${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": ${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": ${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": 
${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": ${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": ${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 
00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.744 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.744 "hdgst": ${hdgst:-false}, 00:20:26.744 "ddgst": ${ddgst:-false} 00:20:26.744 }, 00:20:26.744 "method": "bdev_nvme_attach_controller" 00:20:26.744 } 00:20:26.744 EOF 00:20:26.744 )") 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.744 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.744 { 00:20:26.744 "params": { 00:20:26.744 "name": "Nvme$subsystem", 00:20:26.744 "trtype": "$TEST_TRANSPORT", 00:20:26.744 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.744 "adrfam": "ipv4", 00:20:26.744 "trsvcid": "$NVMF_PORT", 00:20:26.744 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.745 "hdgst": ${hdgst:-false}, 00:20:26.745 "ddgst": ${ddgst:-false} 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 } 00:20:26.745 EOF 00:20:26.745 )") 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.745 { 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme$subsystem", 00:20:26.745 "trtype": "$TEST_TRANSPORT", 00:20:26.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "$NVMF_PORT", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.745 "hdgst": ${hdgst:-false}, 00:20:26.745 "ddgst": ${ddgst:-false} 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 } 00:20:26.745 EOF 00:20:26.745 )") 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:26.745 { 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme$subsystem", 00:20:26.745 "trtype": "$TEST_TRANSPORT", 00:20:26.745 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "$NVMF_PORT", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:26.745 "hdgst": ${hdgst:-false}, 00:20:26.745 "ddgst": ${ddgst:-false} 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 } 00:20:26.745 EOF 00:20:26.745 )") 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:26.745 04:10:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme1", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme2", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme3", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme4", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 
00:20:26.745 "params": { 00:20:26.745 "name": "Nvme5", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme6", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme7", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme8", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme9", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:26.745 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 },{ 00:20:26.745 "params": { 00:20:26.745 "name": "Nvme10", 00:20:26.745 "trtype": "tcp", 00:20:26.745 "traddr": "10.0.0.2", 00:20:26.745 "adrfam": "ipv4", 00:20:26.745 "trsvcid": "4420", 00:20:26.745 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:26.745 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:26.745 "hdgst": false, 00:20:26.745 "ddgst": false 00:20:26.745 }, 00:20:26.745 "method": "bdev_nvme_attach_controller" 00:20:26.745 }' 00:20:26.745 [2024-12-09 04:10:55.244734] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:26.745 [2024-12-09 04:10:55.244809] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:26.745 [2024-12-09 04:10:55.317368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:27.002 [2024-12-09 04:10:55.376694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 276861 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:28.897 04:10:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:29.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 276861 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:29.829 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 276686 00:20:29.829 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": 
${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 
00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:29.830 { 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme$subsystem", 00:20:29.830 "trtype": "$TEST_TRANSPORT", 00:20:29.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.830 "adrfam": "ipv4", 00:20:29.830 "trsvcid": "$NVMF_PORT", 00:20:29.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.830 "hdgst": ${hdgst:-false}, 00:20:29.830 "ddgst": ${ddgst:-false} 00:20:29.830 }, 00:20:29.830 "method": "bdev_nvme_attach_controller" 00:20:29.830 } 00:20:29.830 EOF 00:20:29.830 )") 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:20:29.830 04:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:29.830 "params": { 00:20:29.830 "name": "Nvme1", 00:20:29.830 "trtype": "tcp", 00:20:29.830 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme2", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 
00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme3", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme4", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme5", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme6", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme7", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme8", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme9", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 },{ 00:20:29.831 "params": { 00:20:29.831 "name": "Nvme10", 00:20:29.831 "trtype": "tcp", 00:20:29.831 "traddr": "10.0.0.2", 00:20:29.831 "adrfam": "ipv4", 00:20:29.831 "trsvcid": "4420", 00:20:29.831 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:29.831 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:29.831 "hdgst": false, 00:20:29.831 "ddgst": false 00:20:29.831 }, 00:20:29.831 "method": "bdev_nvme_attach_controller" 00:20:29.831 }' 00:20:29.831 [2024-12-09 04:10:58.317637] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:20:29.831 [2024-12-09 04:10:58.317721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277168 ] 00:20:29.831 [2024-12-09 04:10:58.393016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.089 [2024-12-09 04:10:58.454390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.463 Running I/O for 1 seconds... 00:20:32.397 1800.00 IOPS, 112.50 MiB/s 00:20:32.397 Latency(us) 00:20:32.397 [2024-12-09T03:11:00.973Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme1n1 : 1.09 234.62 14.66 0.00 0.00 265725.91 20486.07 253211.69 00:20:32.397 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme2n1 : 1.15 226.79 14.17 0.00 0.00 274357.04 4102.07 243891.01 00:20:32.397 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme3n1 : 1.11 230.95 14.43 0.00 0.00 265145.84 19418.07 251658.24 00:20:32.397 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme4n1 : 1.10 232.17 14.51 0.00 0.00 258559.05 23787.14 236123.78 00:20:32.397 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme5n1 : 1.15 222.45 13.90 0.00 0.00 266694.16 19806.44 259425.47 00:20:32.397 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 
length 0x400 00:20:32.397 Nvme6n1 : 1.13 225.61 14.10 0.00 0.00 254663.87 19029.71 257872.02 00:20:32.397 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme7n1 : 1.16 221.04 13.81 0.00 0.00 259365.74 18738.44 268746.15 00:20:32.397 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme8n1 : 1.17 273.61 17.10 0.00 0.00 205746.78 6747.78 253211.69 00:20:32.397 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme9n1 : 1.16 220.18 13.76 0.00 0.00 251539.15 21359.88 284280.60 00:20:32.397 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:32.397 Verification LBA range: start 0x0 length 0x400 00:20:32.397 Nvme10n1 : 1.21 212.20 13.26 0.00 0.00 248715.38 22524.97 268746.15 00:20:32.397 [2024-12-09T03:11:00.973Z] =================================================================================================================== 00:20:32.397 [2024-12-09T03:11:00.973Z] Total : 2299.62 143.73 0.00 0.00 253879.96 4102.07 284280.60 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- target/shutdown.sh@46 -- # nvmftestfini 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:32.656 rmmod nvme_tcp 00:20:32.656 rmmod nvme_fabrics 00:20:32.656 rmmod nvme_keyring 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 276686 ']' 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 276686 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 276686 ']' 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 276686 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 276686 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 276686' 00:20:32.656 killing process with pid 276686 00:20:32.656 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 276686 00:20:32.657 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 276686 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:33.223 04:11:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:33.223 04:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:35.762 00:20:35.762 real 0m11.574s 00:20:35.762 user 0m33.368s 00:20:35.762 sys 0m3.157s 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:35.762 ************************************ 00:20:35.762 END TEST nvmf_shutdown_tc1 00:20:35.762 ************************************ 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:35.762 ************************************ 00:20:35.762 START TEST nvmf_shutdown_tc2 00:20:35.762 ************************************ 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:35.762 04:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:35.762 04:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:35.762 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:35.762 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:35.762 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:35.763 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.763 04:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:35.763 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:35.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:20:35.763 00:20:35.763 --- 10.0.0.2 ping statistics --- 00:20:35.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.763 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:20:35.763 00:20:35.763 --- 10.0.0.1 ping statistics --- 00:20:35.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.763 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.763 
04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=277930 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 277930 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 277930 ']' 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:35.763 04:11:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.763 [2024-12-09 04:11:03.997110] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:20:35.763 [2024-12-09 04:11:03.997185] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.763 [2024-12-09 04:11:04.076094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:35.763 [2024-12-09 04:11:04.135140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.763 [2024-12-09 04:11:04.135193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.763 [2024-12-09 04:11:04.135216] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.763 [2024-12-09 04:11:04.135227] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.763 [2024-12-09 04:11:04.135237] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:35.763 [2024-12-09 04:11:04.136815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.763 [2024-12-09 04:11:04.136859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:35.763 [2024-12-09 04:11:04.136917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:35.764 [2024-12-09 04:11:04.136920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.764 [2024-12-09 04:11:04.287769] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.764 04:11:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.764 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.021 Malloc1 00:20:36.021 [2024-12-09 04:11:04.393756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.021 Malloc2 00:20:36.021 Malloc3 00:20:36.021 Malloc4 00:20:36.021 Malloc5 00:20:36.278 Malloc6 00:20:36.278 Malloc7 00:20:36.278 Malloc8 00:20:36.278 Malloc9 
00:20:36.278 Malloc10 00:20:36.278 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.278 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:36.278 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.278 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=278107 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 278107 /var/tmp/bdevperf.sock 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 278107 ']' 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:36.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.536 { 00:20:36.536 "params": { 00:20:36.536 "name": "Nvme$subsystem", 00:20:36.536 "trtype": "$TEST_TRANSPORT", 00:20:36.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.536 "adrfam": "ipv4", 00:20:36.536 "trsvcid": "$NVMF_PORT", 00:20:36.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.536 "hdgst": ${hdgst:-false}, 00:20:36.536 "ddgst": ${ddgst:-false} 00:20:36.536 }, 00:20:36.536 "method": "bdev_nvme_attach_controller" 00:20:36.536 } 00:20:36.536 EOF 00:20:36.536 )") 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.536 { 00:20:36.536 "params": { 00:20:36.536 "name": "Nvme$subsystem", 00:20:36.536 "trtype": "$TEST_TRANSPORT", 00:20:36.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.536 "adrfam": "ipv4", 00:20:36.536 "trsvcid": "$NVMF_PORT", 00:20:36.536 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.536 "hdgst": ${hdgst:-false}, 00:20:36.536 "ddgst": ${ddgst:-false} 00:20:36.536 }, 00:20:36.536 "method": "bdev_nvme_attach_controller" 00:20:36.536 } 00:20:36.536 EOF 00:20:36.536 )") 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.536 { 00:20:36.536 "params": { 00:20:36.536 "name": "Nvme$subsystem", 00:20:36.536 "trtype": "$TEST_TRANSPORT", 00:20:36.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.536 "adrfam": "ipv4", 00:20:36.536 "trsvcid": "$NVMF_PORT", 00:20:36.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.536 "hdgst": ${hdgst:-false}, 00:20:36.536 "ddgst": ${ddgst:-false} 00:20:36.536 }, 00:20:36.536 "method": "bdev_nvme_attach_controller" 00:20:36.536 } 00:20:36.536 EOF 00:20:36.536 )") 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.536 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.536 { 00:20:36.536 "params": { 00:20:36.536 "name": "Nvme$subsystem", 00:20:36.536 "trtype": "$TEST_TRANSPORT", 00:20:36.536 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.536 "adrfam": "ipv4", 00:20:36.536 "trsvcid": "$NVMF_PORT", 00:20:36.536 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.536 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.536 "hdgst": 
${hdgst:-false}, 00:20:36.536 "ddgst": ${ddgst:-false} 00:20:36.536 }, 00:20:36.536 "method": "bdev_nvme_attach_controller" 00:20:36.536 } 00:20:36.536 EOF 00:20:36.536 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.537 { 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme$subsystem", 00:20:36.537 "trtype": "$TEST_TRANSPORT", 00:20:36.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "$NVMF_PORT", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.537 "hdgst": ${hdgst:-false}, 00:20:36.537 "ddgst": ${ddgst:-false} 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 } 00:20:36.537 EOF 00:20:36.537 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.537 { 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme$subsystem", 00:20:36.537 "trtype": "$TEST_TRANSPORT", 00:20:36.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "$NVMF_PORT", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.537 "hdgst": ${hdgst:-false}, 00:20:36.537 "ddgst": ${ddgst:-false} 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 
00:20:36.537 } 00:20:36.537 EOF 00:20:36.537 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.537 { 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme$subsystem", 00:20:36.537 "trtype": "$TEST_TRANSPORT", 00:20:36.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "$NVMF_PORT", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.537 "hdgst": ${hdgst:-false}, 00:20:36.537 "ddgst": ${ddgst:-false} 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 } 00:20:36.537 EOF 00:20:36.537 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.537 { 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme$subsystem", 00:20:36.537 "trtype": "$TEST_TRANSPORT", 00:20:36.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "$NVMF_PORT", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.537 "hdgst": ${hdgst:-false}, 00:20:36.537 "ddgst": ${ddgst:-false} 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 } 00:20:36.537 EOF 00:20:36.537 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.537 { 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme$subsystem", 00:20:36.537 "trtype": "$TEST_TRANSPORT", 00:20:36.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "$NVMF_PORT", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.537 "hdgst": ${hdgst:-false}, 00:20:36.537 "ddgst": ${ddgst:-false} 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 } 00:20:36.537 EOF 00:20:36.537 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:36.537 { 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme$subsystem", 00:20:36.537 "trtype": "$TEST_TRANSPORT", 00:20:36.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "$NVMF_PORT", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:36.537 "hdgst": ${hdgst:-false}, 00:20:36.537 "ddgst": ${ddgst:-false} 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 } 00:20:36.537 EOF 00:20:36.537 )") 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:20:36.537 04:11:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme1", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme2", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme3", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme4", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 
00:20:36.537 "params": { 00:20:36.537 "name": "Nvme5", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme6", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme7", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme8", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme9", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:36.537 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 },{ 00:20:36.537 "params": { 00:20:36.537 "name": "Nvme10", 00:20:36.537 "trtype": "tcp", 00:20:36.537 "traddr": "10.0.0.2", 00:20:36.537 "adrfam": "ipv4", 00:20:36.537 "trsvcid": "4420", 00:20:36.537 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:36.537 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:36.537 "hdgst": false, 00:20:36.537 "ddgst": false 00:20:36.537 }, 00:20:36.537 "method": "bdev_nvme_attach_controller" 00:20:36.537 }' 00:20:36.537 [2024-12-09 04:11:04.921731] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:36.537 [2024-12-09 04:11:04.921817] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278107 ] 00:20:36.537 [2024-12-09 04:11:04.992864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.537 [2024-12-09 04:11:05.052240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.431 Running I/O for 10 seconds... 
00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:38.688 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:38.689 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:38.946 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 278107 00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 278107 ']' 
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 278107
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278107
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278107'
00:20:39.203 killing process with pid 278107
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 278107
00:20:39.203 04:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 278107
00:20:39.460 Received shutdown signal, test time was about 0.951401 seconds
00:20:39.460
00:20:39.460 Latency(us)
[2024-12-09T03:11:08.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:39.460 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme1n1 : 0.95 269.31 16.83 0.00 0.00 234875.26 20874.43 254765.13
00:20:39.460 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme2n1 : 0.94 271.69 16.98 0.00 0.00 227777.23 23592.96 240784.12
00:20:39.460 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme3n1 : 0.95 270.36 16.90 0.00 0.00 224888.79 18252.99 256318.58
00:20:39.460 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme4n1 : 0.93 278.29 17.39 0.00 0.00 212580.77 4927.34 250104.79
00:20:39.460 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme5n1 : 0.93 206.08 12.88 0.00 0.00 282668.50 39418.69 267192.70
00:20:39.460 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme6n1 : 0.92 208.27 13.02 0.00 0.00 273368.30 20680.25 245444.46
00:20:39.460 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme7n1 : 0.91 210.09 13.13 0.00 0.00 262992.53 33787.45 229910.00
00:20:39.460 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme8n1 : 0.91 211.07 13.19 0.00 0.00 257421.46 18252.99 250104.79
00:20:39.460 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme9n1 : 0.94 204.81 12.80 0.00 0.00 260848.96 22913.33 281173.71
00:20:39.460 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:39.460 Verification LBA range: start 0x0 length 0x400
00:20:39.460 Nvme10n1 : 0.91 215.91 13.49 0.00 0.00 236861.88 2633.58 243891.01
00:20:39.460 [2024-12-09T03:11:08.036Z] ===================================================================================================================
00:20:39.460 [2024-12-09T03:11:08.036Z] Total
: 2345.89 146.62 0.00 0.00 244716.21 2633.58 281173.71 00:20:39.716 04:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 277930 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:40.648 rmmod nvme_tcp 00:20:40.648 rmmod nvme_fabrics 00:20:40.648 rmmod nvme_keyring 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 277930 ']' 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 277930 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 277930 ']' 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 277930 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277930 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277930' 00:20:40.648 killing process with pid 277930 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 277930 00:20:40.648 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 277930 00:20:41.214 04:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:41.214 04:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.122 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:43.122 00:20:43.122 real 0m7.875s 00:20:43.122 user 0m24.585s 00:20:43.122 sys 0m1.427s 00:20:43.122 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.122 04:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.122 ************************************ 00:20:43.122 END TEST nvmf_shutdown_tc2 00:20:43.122 ************************************ 00:20:43.122 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:43.122 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:43.122 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.122 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:43.383 ************************************ 00:20:43.383 START TEST nvmf_shutdown_tc3 00:20:43.383 ************************************ 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 
00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:43.383 04:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:43.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.383 04:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:43.383 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:43.383 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:43.384 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:43.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:43.384 04:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 
-i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:43.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:43.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:20:43.384 00:20:43.384 --- 10.0.0.2 ping statistics --- 00:20:43.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.384 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:43.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:20:43.384 00:20:43.384 --- 10.0.0.1 ping statistics --- 00:20:43.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.384 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=279025 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 279025 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 279025 ']' 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.384 04:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.384 04:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.384 [2024-12-09 04:11:11.936745] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:43.384 [2024-12-09 04:11:11.936842] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:43.643 [2024-12-09 04:11:12.009323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:43.643 [2024-12-09 04:11:12.067963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:43.643 [2024-12-09 04:11:12.068036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:43.643 [2024-12-09 04:11:12.068049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:43.643 [2024-12-09 04:11:12.068060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:43.643 [2024-12-09 04:11:12.068068] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:43.643 [2024-12-09 04:11:12.069684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:43.643 [2024-12-09 04:11:12.069744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:43.643 [2024-12-09 04:11:12.069809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:43.643 [2024-12-09 04:11:12.069812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:43.643 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.644 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.902 [2024-12-09 04:11:12.220476] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.902 04:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.902 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:43.902 Malloc1 00:20:43.902 [2024-12-09 04:11:12.322592] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.902 Malloc2 00:20:43.902 Malloc3 00:20:43.902 Malloc4 00:20:44.160 Malloc5 00:20:44.160 Malloc6 00:20:44.160 Malloc7 00:20:44.160 Malloc8 00:20:44.160 Malloc9 
00:20:44.160 Malloc10 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=279205 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 279205 /var/tmp/bdevperf.sock 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 279205 ']' 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:20:44.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": 
${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 
00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.422 { 00:20:44.422 "params": { 00:20:44.422 "name": "Nvme$subsystem", 00:20:44.422 "trtype": "$TEST_TRANSPORT", 00:20:44.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.422 "adrfam": "ipv4", 00:20:44.422 "trsvcid": "$NVMF_PORT", 00:20:44.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.422 "hdgst": ${hdgst:-false}, 00:20:44.422 "ddgst": ${ddgst:-false} 00:20:44.422 }, 00:20:44.422 "method": "bdev_nvme_attach_controller" 00:20:44.422 } 00:20:44.422 EOF 00:20:44.422 )") 00:20:44.422 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@582 -- # cat 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.423 { 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme$subsystem", 00:20:44.423 "trtype": "$TEST_TRANSPORT", 00:20:44.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "$NVMF_PORT", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.423 "hdgst": ${hdgst:-false}, 00:20:44.423 "ddgst": ${ddgst:-false} 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 } 00:20:44.423 EOF 00:20:44.423 )") 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:44.423 { 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme$subsystem", 00:20:44.423 "trtype": "$TEST_TRANSPORT", 00:20:44.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "$NVMF_PORT", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:44.423 "hdgst": ${hdgst:-false}, 00:20:44.423 "ddgst": ${ddgst:-false} 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 } 00:20:44.423 EOF 00:20:44.423 )") 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@584 -- # jq . 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:20:44.423 04:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme1", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme2", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme3", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme4", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 
00:20:44.423 "params": { 00:20:44.423 "name": "Nvme5", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme6", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme7", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme8", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme9", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:44.423 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 },{ 00:20:44.423 "params": { 00:20:44.423 "name": "Nvme10", 00:20:44.423 "trtype": "tcp", 00:20:44.423 "traddr": "10.0.0.2", 00:20:44.423 "adrfam": "ipv4", 00:20:44.423 "trsvcid": "4420", 00:20:44.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:44.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:44.423 "hdgst": false, 00:20:44.423 "ddgst": false 00:20:44.423 }, 00:20:44.423 "method": "bdev_nvme_attach_controller" 00:20:44.423 }' 00:20:44.423 [2024-12-09 04:11:12.819305] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:44.423 [2024-12-09 04:11:12.819384] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279205 ] 00:20:44.423 [2024-12-09 04:11:12.892335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.423 [2024-12-09 04:11:12.951537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.795 Running I/O for 10 seconds... 
00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:46.361 04:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:20:46.361 04:11:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 279025 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 279025 ']' 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 279025 00:20:46.618 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:46.619 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.619 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279025 00:20:46.887 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:46.887 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:46.887 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279025' 00:20:46.887 killing process with pid 279025 00:20:46.887 04:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 279025 00:20:46.887 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 279025 00:20:46.887 [2024-12-09 04:11:15.229813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bad30 is same with the state(6) to be set 00:20:46.887 [2024-12-09 04:11:15.234418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104be70 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.236770] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237347]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237460] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237496] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.888 [2024-12-09 04:11:15.237519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.237631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb200 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239304] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239343] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239408] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239455] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239647] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239776] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239788] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239916] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239927] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239955] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.239989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.240070] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bb6d0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241361] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241451] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241486] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.889 [2024-12-09 04:11:15.241510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241521] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241534] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241552] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241565] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241645] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241657] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241681] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241717] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241754] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241781] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241805] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241886] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241925] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241936] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241948] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.241995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242007] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242066] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242089] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bbbc0 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242942] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242983] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.242996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243021] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243064] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243148] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243212] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243241] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243378] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243397] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.890 [2024-12-09 04:11:15.243440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243452] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243549] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243573] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243653] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243716] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243764] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.243791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc090 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 
04:11:15.245282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245348] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812950 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 
[2024-12-09 04:11:15.245386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245448] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245493] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 
[2024-12-09 04:11:15.245498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245507] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.891 [2024-12-09 04:11:15.245566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.891 [2024-12-09 04:11:15.245580] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4310 is same with the state(6) to 
be set 00:20:46.891 [2024-12-09 04:11:15.245592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.891 [2024-12-09 04:11:15.245631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245644] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245712] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245729] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245743] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1c80 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same 
with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245825] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245865] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245909] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.245937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.245950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4c90 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245976] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.245988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246025] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246054] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246067] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc410 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b8130 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246262] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.892 [2024-12-09 04:11:15.246326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3e80 is same with the state(6) to be set 00:20:46.892 [2024-12-09 04:11:15.246613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.892 [2024-12-09 04:11:15.246639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.892 [2024-12-09 04:11:15.246683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.892 [2024-12-09 04:11:15.246715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.892 [2024-12-09 04:11:15.246731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.892 [2024-12-09 04:11:15.246745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.246977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.246992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 
04:11:15.247081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc790 is same with the state(6) to be set 00:20:46.893 [2024-12-09 04:11:15.247396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:46.893 [2024-12-09 04:11:15.247614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 
04:11:15.247778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.893 [2024-12-09 04:11:15.247957] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.893 [2024-12-09 04:11:15.247971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.247991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248033] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248077] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bcc60 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248208] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 
[2024-12-09 04:11:15.248614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.894 [2024-12-09 04:11:15.248671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.894 [2024-12-09 04:11:15.248714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.894 [2024-12-09 04:11:15.248816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248970] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.248994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249043] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with 
the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249100] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249112] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249135] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249147] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249160] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.894 [2024-12-09 04:11:15.249195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 
00:20:46.895 [2024-12-09 04:11:15.249238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249262] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249369] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249381] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 
04:11:15.249405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249465] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249549] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249629] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 
[2024-12-09 04:11:15.249666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x104b9a0 is same with the state(6) to be set 00:20:46.895 [2024-12-09 04:11:15.249676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.249982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.249996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.895 [2024-12-09 04:11:15.250342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.895 [2024-12-09 04:11:15.250357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 
04:11:15.250403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.250982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.250997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.251010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.251025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.251039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.251054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.251067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.251082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 
04:11:15.251099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.251115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.896 [2024-12-09 04:11:15.277547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.896 [2024-12-09 04:11:15.277564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.277783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.277861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:46.897 [2024-12-09 04:11:15.278348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bda0 is same with the state(6) to be set 00:20:46.897 [2024-12-09 
04:11:15.278508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812950 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.278565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278688] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810520 is same with the state(6) to be set 00:20:46.897 [2024-12-09 04:11:15.278749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c4310 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.278778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1c80 (9): Bad 
file descriptor 00:20:46.897 [2024-12-09 04:11:15.278807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4c90 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.278859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.278969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.278983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0e60 is same with the state(6) to be set 00:20:46.897 [2024-12-09 04:11:15.279035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.279058] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.279073] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.279088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.279103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.279117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.279132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:46.897 [2024-12-09 04:11:15.279146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.279159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132c110 is same with the state(6) to be set 00:20:46.897 [2024-12-09 04:11:15.279199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b8130 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.279233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3e80 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.297798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:46.897 [2024-12-09 04:11:15.298009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bda0 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.298057] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1810520 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.298099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0e60 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.298139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132c110 (9): Bad file descriptor 00:20:46.897 [2024-12-09 04:11:15.299513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:46.897 task offset: 27264 on job bdev=Nvme1n1 fails 00:20:46.897 1741.00 IOPS, 108.81 MiB/s [2024-12-09T03:11:15.473Z] [2024-12-09 04:11:15.299770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.897 [2024-12-09 04:11:15.299808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c4310 with addr=10.0.0.2, port=4420 00:20:46.897 [2024-12-09 04:11:15.299828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4310 is same with the state(6) to be set 00:20:46.897 [2024-12-09 04:11:15.300261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.897 [2024-12-09 04:11:15.300540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.897 [2024-12-09 04:11:15.300571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 
04:11:15.300589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.898 [2024-12-09 04:11:15.300919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.898 [2024-12-09 04:11:15.300936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 
nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[log condensed: repeated *NOTICE* pairs from nvme_qpair.c (243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion) spanning 00:20:46.898 to 00:20:46.901 ([2024-12-09 04:11:15.300951] to [2024-12-09 04:11:15.307939]): READ and WRITE commands on sqid:1, cid:0 through cid:63, nsid:1, lba:16384 through lba:33152, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:20:46.901 [2024-12-09 04:11:15.307348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c7f00 is same with the state(6) to be set
00:20:46.901 [2024-12-09 04:11:15.307987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308174] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308363] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 04:11:15.308695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.901 [2024-12-09 04:11:15.308709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.901 [2024-12-09 
04:11:15.308726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.308974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.308988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 
nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:46.902 [2024-12-09 04:11:15.309264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309451] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.902 [2024-12-09 04:11:15.309787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.902 [2024-12-09 04:11:15.309808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.309823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.309843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.309863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.309879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.309893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:46.903 [2024-12-09 04:11:15.311171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:46.903 [2024-12-09 04:11:15.311344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.903 [2024-12-09 04:11:15.311374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181bda0 with addr=10.0.0.2, port=4420 00:20:46.903 [2024-12-09 04:11:15.311391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bda0 is same with the state(6) to be set 00:20:46.903 [2024-12-09 04:11:15.311418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c4310 (9): Bad file descriptor 00:20:46.903 
[2024-12-09 04:11:15.311468] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:20:46.903 [2024-12-09 04:11:15.311504] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:46.903 [2024-12-09 04:11:15.311535] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:20:46.903 [2024-12-09 04:11:15.311556] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:20:46.903 [2024-12-09 04:11:15.311576] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:20:46.903 [2024-12-09 04:11:15.311595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bda0 (9): Bad file descriptor 00:20:46.903 [2024-12-09 04:11:15.311666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311757] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.311976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.311991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:46.903 [2024-12-09 04:11:15.312132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312325] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.903 [2024-12-09 04:11:15.312681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.903 [2024-12-09 04:11:15.312698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.312712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.312728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.312743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.312760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.312774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.312790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.312804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.312820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.312835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.312851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 
04:11:15.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.312882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.312896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325568] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 
[2024-12-09 04:11:15.325942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.325973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.325994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.326244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.326258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.327680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.327705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.327730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.327746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.327764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.327778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.904 [2024-12-09 04:11:15.327795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.904 [2024-12-09 04:11:15.327815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.327832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.327847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.327863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.327878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.327894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.327909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.327926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.327941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.327957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.327971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.327988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328081] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328255] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 
04:11:15.328624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.905 [2024-12-09 04:11:15.328827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.905 [2024-12-09 04:11:15.328843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.328857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.328873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.328887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.328904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.328918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.328934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.328949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.328965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 
nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.328981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:46.906 [2024-12-09 04:11:15.329155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329337] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.329697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.906 [2024-12-09 04:11:15.329711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.906 [2024-12-09 04:11:15.331010] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.906 [2024-12-09 04:11:15.331960] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:46.906 [2024-12-09 04:11:15.332034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:46.906 [2024-12-09 04:11:15.332082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:46.906 [2024-12-09 04:11:15.332283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.906 [2024-12-09 04:11:15.332324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c1c80 with addr=10.0.0.2, port=4420 00:20:46.906 [2024-12-09 04:11:15.332347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1c80 is same with the state(6) to be set 00:20:46.906 [2024-12-09 04:11:15.332434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.906 [2024-12-09 04:11:15.332459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f4c90 with addr=10.0.0.2, port=4420 00:20:46.906 [2024-12-09 04:11:15.332476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4c90 is same with the state(6) to be set 00:20:46.906 [2024-12-09 04:11:15.332495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:46.906 [2024-12-09 04:11:15.332509] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:46.906 [2024-12-09 04:11:15.332526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:46.906 [2024-12-09 04:11:15.332544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:20:46.906 [2024-12-09 04:11:15.332601] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:20:46.906 [2024-12-09 04:11:15.332626] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:20:46.906 [2024-12-09 04:11:15.332646] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:20:46.907 [2024-12-09 04:11:15.332665] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:20:46.907 [2024-12-09 04:11:15.332696] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:20:46.907 [2024-12-09 04:11:15.332723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4c90 (9): Bad file descriptor 00:20:46.907 [2024-12-09 04:11:15.332749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1c80 (9): Bad file descriptor 00:20:46.907 [2024-12-09 04:11:15.333424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333585] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 
04:11:15.333952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.333982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.333997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 
nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:46.907 [2024-12-09 04:11:15.334504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.907 [2024-12-09 04:11:15.334566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.907 [2024-12-09 04:11:15.334580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334672] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.334971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.334985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 
04:11:15.335204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.335460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.335474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c6c40 is same with the state(6) to be set 00:20:46.908 [2024-12-09 04:11:15.336754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.336778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.336799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.336815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.336832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.336846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.336863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.336883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.908 [2024-12-09 04:11:15.336901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.908 [2024-12-09 04:11:15.336916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.336932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.336946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.336962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.336977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.336994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337024] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337207] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 
04:11:15.337575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337752] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 
nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.337984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.337999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.338015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.338030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.338047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.338061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.338081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.338097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:46.909 [2024-12-09 04:11:15.338114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.909 [2024-12-09 04:11:15.338129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.909 [2024-12-09 04:11:15.338145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338294] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:46.910 [2024-12-09 04:11:15.338794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:46.910 [2024-12-09 04:11:15.338809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17ca480 is same with the state(6) to be set 00:20:46.910 [2024-12-09 04:11:15.340742] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:20:46.910 [2024-12-09 04:11:15.340799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:46.910 [2024-12-09 04:11:15.340822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:20:46.910
00:20:46.910 Latency(us)
00:20:46.910 [2024-12-09T03:11:15.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:46.910 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme1n1 ended in about 1.02 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme1n1 : 1.02 187.96 11.75 62.65 0.00 252782.36 19903.53 265639.25
00:20:46.910 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme2n1 ended in about 1.07 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme2n1 : 1.07 179.97 11.25 59.99 0.00 259527.68 20874.43 239230.67
00:20:46.910 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme3n1 ended in about 1.04 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme3n1 : 1.04 184.09 11.51 61.36 0.00 248914.87 18058.81 254765.13
00:20:46.910 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme4n1 ended in about 1.07 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme4n1 : 1.07 183.14 11.45 59.80 0.00 247224.57 18447.17 254765.13
00:20:46.910 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme5n1 ended in about 1.05 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme5n1 : 1.05 187.31 11.71 61.16 0.00 236748.15 19126.80 256318.58
00:20:46.910 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme6n1 ended in about 1.08 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme6n1 : 1.08 178.44 11.15 59.48 0.00 243249.30 22136.60 239230.67
00:20:46.910 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme7n1 ended in about 1.07 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme7n1 : 1.07 179.22 11.20 4.67 0.00 299204.18 16796.63 302921.96
00:20:46.910 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme8n1 ended in about 1.03 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme8n1 : 1.03 185.69 11.61 61.90 0.00 223875.79 20194.80 256318.58
00:20:46.910 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme9n1 ended in about 1.08 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme9n1 : 1.08 118.59 7.41 59.30 0.00 307869.01 37476.88 296708.17
00:20:46.910 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:46.910 Job: Nvme10n1 ended in about 1.05 seconds with error
00:20:46.910 Verification LBA range: start 0x0 length 0x400
00:20:46.910 Nvme10n1 : 1.05 121.85 7.62 60.92 0.00 292327.47 20291.89 284280.60
00:20:46.910 [2024-12-09T03:11:15.486Z] ===================================================================================================================
00:20:46.910 [2024-12-09T03:11:15.487Z] Total : 1706.26 106.64 551.23 0.00 258072.85 16796.63 302921.96
00:20:46.911 [2024-12-09 04:11:15.368781] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:46.911 [2024-12-09 04:11:15.368891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:20:46.911 [2024-12-09 04:11:15.369214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.369254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1812950 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.369283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1812950 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.369380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.369408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c3e80 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.369440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c3e80 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.369462] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.369477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.369494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.369513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:20:46.911 [2024-12-09 04:11:15.370509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:46.911 [2024-12-09 04:11:15.370682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.370712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13b8130 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.370729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13b8130 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.370829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.370854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x132c110 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.370871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x132c110 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.370961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.370987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c0e60 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.371004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c0e60 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.371093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.371120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1810520 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.371136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1810520 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.371163] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1812950 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.371185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c3e80 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.371203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.371217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.371232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.371247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:46.911 [2024-12-09 04:11:15.371264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.371288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.371312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.371326] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:20:46.911 [2024-12-09 04:11:15.371373] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:20:46.911 [2024-12-09 04:11:15.371405] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 
00:20:46.911 [2024-12-09 04:11:15.372138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.372169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c4310 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.372186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c4310 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.372206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13b8130 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.372228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132c110 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.372247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c0e60 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.372265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1810520 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.372292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.372305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.372319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.372333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:20:46.911 [2024-12-09 04:11:15.372348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.372361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.372374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.372387] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:46.911 [2024-12-09 04:11:15.372480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:20:46.911 [2024-12-09 04:11:15.372506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:46.911 [2024-12-09 04:11:15.372524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:46.911 [2024-12-09 04:11:15.372565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c4310 (9): Bad file descriptor 00:20:46.911 [2024-12-09 04:11:15.372585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.372599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.372612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.372626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:20:46.911 [2024-12-09 04:11:15.372641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.372654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.372666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.372679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:20:46.911 [2024-12-09 04:11:15.372693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.372711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.372726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.372739] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:20:46.911 [2024-12-09 04:11:15.372752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:20:46.911 [2024-12-09 04:11:15.372765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:20:46.911 [2024-12-09 04:11:15.372779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:20:46.911 [2024-12-09 04:11:15.372792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:20:46.911 [2024-12-09 04:11:15.372915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.372943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x181bda0 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.372960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x181bda0 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.373034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.911 [2024-12-09 04:11:15.373059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17f4c90 with addr=10.0.0.2, port=4420 00:20:46.911 [2024-12-09 04:11:15.373075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17f4c90 is same with the state(6) to be set 00:20:46.911 [2024-12-09 04:11:15.373156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:46.912 [2024-12-09 04:11:15.373182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x13c1c80 with addr=10.0.0.2, port=4420 00:20:46.912 [2024-12-09 04:11:15.373198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13c1c80 is same with the state(6) to be set 00:20:46.912 [2024-12-09 04:11:15.373213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:46.912 [2024-12-09 04:11:15.373227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:46.912 [2024-12-09 04:11:15.373241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:46.912 [2024-12-09 04:11:15.373255] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:46.912 [2024-12-09 04:11:15.373325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x181bda0 (9): Bad file descriptor 00:20:46.912 [2024-12-09 04:11:15.373352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17f4c90 (9): Bad file descriptor 00:20:46.912 [2024-12-09 04:11:15.373371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c1c80 (9): Bad file descriptor 00:20:46.912 [2024-12-09 04:11:15.373413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:20:46.912 [2024-12-09 04:11:15.373432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:20:46.912 [2024-12-09 04:11:15.373447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:20:46.912 [2024-12-09 04:11:15.373461] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:20:46.912 [2024-12-09 04:11:15.373475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:20:46.912 [2024-12-09 04:11:15.373489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:20:46.912 [2024-12-09 04:11:15.373508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:20:46.912 [2024-12-09 04:11:15.373521] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:20:46.912 [2024-12-09 04:11:15.373535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:46.912 [2024-12-09 04:11:15.373548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:46.912 [2024-12-09 04:11:15.373561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:46.912 [2024-12-09 04:11:15.373574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:47.476 04:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 279205 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 279205 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 279205 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:48.412 04:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:48.412 rmmod nvme_tcp 00:20:48.412 rmmod nvme_fabrics 00:20:48.412 rmmod nvme_keyring 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 279025 ']' 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 279025 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 279025 ']' 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 279025 00:20:48.412 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (279025) - No such process 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 279025 is not found' 00:20:48.412 Process with pid 279025 is not found 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:20:48.412 04:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:48.412 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:48.413 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:48.413 04:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:50.945 00:20:50.945 real 0m7.199s 00:20:50.945 user 0m17.076s 00:20:50.945 sys 0m1.453s 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:50.945 ************************************ 00:20:50.945 END TEST nvmf_shutdown_tc3 00:20:50.945 ************************************ 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:50.945 ************************************ 00:20:50.945 START TEST nvmf_shutdown_tc4 00:20:50.945 ************************************ 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:50.945 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:20:50.946 04:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.946 04:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:50.946 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:50.946 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.946 04:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:50.946 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:50.946 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:50.946 04:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:50.946 04:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:50.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:50.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:20:50.946 00:20:50.946 --- 10.0.0.2 ping statistics --- 00:20:50.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.946 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:50.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:50.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:20:50.946 00:20:50.946 --- 10.0.0.1 ping statistics --- 00:20:50.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:50.946 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:50.946 04:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=280107 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 280107 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 280107 ']' 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.946 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:50.946 [2024-12-09 04:11:19.272481] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:20:50.946 [2024-12-09 04:11:19.272578] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:50.946 [2024-12-09 04:11:19.356619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:50.946 [2024-12-09 04:11:19.418121] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:50.946 [2024-12-09 04:11:19.418172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:50.946 [2024-12-09 04:11:19.418185] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:50.946 [2024-12-09 04:11:19.418197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:50.946 [2024-12-09 04:11:19.418207] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:50.946 [2024-12-09 04:11:19.419678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.946 [2024-12-09 04:11:19.419704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:50.947 [2024-12-09 04:11:19.419761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:50.947 [2024-12-09 04:11:19.419765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:51.204 [2024-12-09 04:11:19.574391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.204 04:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.204 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.205 04:11:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:51.205 Malloc1 00:20:51.205 [2024-12-09 04:11:19.674813] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.205 Malloc2 00:20:51.205 Malloc3 00:20:51.462 Malloc4 00:20:51.462 Malloc5 00:20:51.462 Malloc6 00:20:51.462 Malloc7 00:20:51.462 Malloc8 00:20:51.720 Malloc9 
00:20:51.720 Malloc10 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=280280 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:51.720 04:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:51.720 [2024-12-09 04:11:20.212291] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 280107 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 280107 ']' 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 280107 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280107 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280107' 00:20:56.990 killing process with pid 280107 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 280107 00:20:56.990 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 280107 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 
00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 
starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 [2024-12-09 04:11:25.201128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 
Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 [2024-12-09 04:11:25.202247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: 
-6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.990 starting I/O failed: -6 00:20:56.990 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with 
error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 [2024-12-09 04:11:25.203651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 
Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 
00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: 
-6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 [2024-12-09 04:11:25.205048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set 00:20:56.991 [2024-12-09 04:11:25.205097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set 00:20:56.991 [2024-12-09 04:11:25.205120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set 00:20:56.991 [2024-12-09 04:11:25.205134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 [2024-12-09 04:11:25.205146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set 00:20:56.991 starting I/O failed: -6 00:20:56.991 [2024-12-09 04:11:25.205158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27008a0 is same with the state(6) to be set 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 
00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 [2024-12-09 04:11:25.205440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:56.991 NVMe io qpair process completion error 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 
00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 starting I/O failed: -6 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.991 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 starting I/O failed: -6 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 starting I/O failed: -6 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 starting I/O failed: -6 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 [2024-12-09 04:11:25.206732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:56.992 [2024-12-09 04:11:25.206871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 [2024-12-09 04:11:25.206899] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set 00:20:56.992 Write completed with error (sct=0, sc=8) 00:20:56.992 [2024-12-09 04:11:25.206914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set 00:20:56.992 starting I/O failed: -6 00:20:56.992 [2024-12-09 04:11:25.206926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set 00:20:56.992 Write completed with 
error (sct=0, sc=8)
00:20:56.992 starting I/O failed: -6
00:20:56.992 [2024-12-09 04:11:25.206939] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.206952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2701c20 is same with the state(6) to be set
00:20:56.992 Write completed with error (sct=0, sc=8)
00:20:56.992 starting I/O failed: -6
00:20:56.992 [2024-12-09 04:11:25.207338] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207390] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207403] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2700d90 is same with the state(6) to be set
00:20:56.992 [2024-12-09 04:11:25.207856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.993 [2024-12-09 04:11:25.209062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.993 [2024-12-09 04:11:25.210777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.993 NVMe io qpair process completion error
00:20:56.993 [2024-12-09 04:11:25.212025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.994 [2024-12-09 04:11:25.213148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.994 [2024-12-09 04:11:25.214254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:56.995 [2024-12-09 04:11:25.216130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.995 NVMe io qpair process completion error
00:20:56.995 [2024-12-09 04:11:25.217345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:20:56.995 [2024-12-09 04:11:25.218415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:20:56.995 Write
completed with error (sct=0, sc=8) 00:20:56.995 starting I/O failed: -6 00:20:56.995 Write completed with error (sct=0, sc=8) 00:20:56.995 starting I/O failed: -6 00:20:56.995 Write completed with error (sct=0, sc=8) 00:20:56.995 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 
00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 [2024-12-09 04:11:25.219600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, 
sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error 
(sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with 
error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 [2024-12-09 04:11:25.221523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:56.996 NVMe io qpair process completion error 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error 
(sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 starting I/O failed: -6 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.996 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 [2024-12-09 04:11:25.222837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error 
(sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 
00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 [2024-12-09 04:11:25.223968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with 
error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 
starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 [2024-12-09 04:11:25.225118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 
00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.997 starting I/O failed: -6 00:20:56.997 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, 
sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error 
(sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 [2024-12-09 04:11:25.227557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:56.998 NVMe io qpair process completion error 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 
00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 [2024-12-09 04:11:25.228864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 Write completed with error (sct=0, sc=8) 00:20:56.998 starting I/O failed: -6 
00:20:56.998 Write completed with error (sct=0, sc=8)
00:20:56.998 starting I/O failed: -6
[… repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided …]
00:20:56.998 [2024-12-09 04:11:25.229944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… repeated entries elided …]
00:20:56.999 [2024-12-09 04:11:25.231137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
[… repeated entries elided …]
00:20:56.999 [2024-12-09 04:11:25.233859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:56.999 NVMe io qpair process completion error
[… repeated entries elided …]
00:20:57.000 [2024-12-09 04:11:25.235294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[… repeated entries elided …]
00:20:57.000 [2024-12-09 04:11:25.236450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… repeated entries elided …]
00:20:57.000 [2024-12-09 04:11:25.237600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[… repeated entries elided …]
00:20:57.001 [2024-12-09 04:11:25.240178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.001 NVMe io qpair process completion error
[… repeated entries elided …]
00:20:57.001 [2024-12-09 04:11:25.241521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[… repeated entries elided …]
00:20:57.001 [2024-12-09 04:11:25.242632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[… repeated entries elided …]
00:20:57.002 [2024-12-09 04:11:25.243799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[… repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided …]
00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, 
sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 [2024-12-09 04:11:25.246135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:57.002 NVMe io qpair process completion error 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 starting I/O failed: -6 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.002 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write 
completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 [2024-12-09 04:11:25.247554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write 
completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error 
(sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 [2024-12-09 04:11:25.248523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 
00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with 
error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 [2024-12-09 04:11:25.249704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, 
sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.003 Write completed with error (sct=0, sc=8) 00:20:57.003 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error 
(sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with 
error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 [2024-12-09 04:11:25.251871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:20:57.004 NVMe io qpair process completion error 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error 
(sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 [2024-12-09 04:11:25.253192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 
00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 starting I/O failed: -6 00:20:57.004 Write 
completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.004 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 [2024-12-09 04:11:25.254226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 
starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6
00:20:57.005 [... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages repeated ...]
00:20:57.005 [2024-12-09 04:11:25.255422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:20:57.005 [... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages repeated ...]
00:20:57.005 Write completed with error 
(sct=0, sc=8) 00:20:57.005 starting I/O failed: -6 00:20:57.005 Write completed with error (sct=0, sc=8) 00:20:57.005 starting I/O failed: -6
00:20:57.006 [2024-12-09 04:11:25.257755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:20:57.006 NVMe io qpair process completion error
00:20:57.006 Initializing NVMe Controllers
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:20:57.006 Controller IO queue size 128, less than required.
00:20:57.006 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:20:57.006 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:20:57.006 Initialization complete. Launching workers.
00:20:57.006 ========================================================
00:20:57.006 Latency(us)
00:20:57.006 Device Information                                                     :     IOPS    MiB/s   Average       min       max
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  1812.10    77.86  70656.56    914.11 125497.87
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  1828.41    78.56  70065.18    893.49 127074.16
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  1841.12    79.11  69618.34   1059.90 122389.96
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  1848.11    79.41  69382.98    941.82 133414.37
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  1852.77    79.61  69235.45   1090.08 136172.17
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1844.72    79.27  68720.96   1012.25 118117.32
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1861.66    79.99  68118.11   1179.70 114314.38
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  1813.59    77.93  69944.17   1033.30 116925.31
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  1819.73    78.19  69729.30    883.73 117108.05
00:20:57.006 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  1812.31    77.87  70040.49   1127.46 116425.35
00:20:57.006 ========================================================
00:20:57.006 Total                                                                  : 18334.51   787.81  69545.45    883.73 136172.17
00:20:57.006
00:20:57.006 [2024-12-09 04:11:25.263890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144cae0 is same with the state(6) to be set
00:20:57.006 [2024-12-09 04:11:25.263986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144ad10 is same with the state(6) to be set
00:20:57.006 [2024-12-09 04:11:25.264045] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144bc50 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a6b0 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144a9e0 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144c720 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144c900 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b2c0 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b5f0 is same with the state(6) to be set 00:20:57.006 [2024-12-09 04:11:25.264508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144b920 is same with the state(6) to be set 00:20:57.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:20:57.264 04:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 280280 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 280280 00:20:58.195 04:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 280280 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:58.195 04:11:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:58.195 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:58.195 rmmod nvme_tcp 00:20:58.195 rmmod nvme_fabrics 00:20:58.454 rmmod nvme_keyring 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 280107 ']' 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 280107 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 280107 ']' 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 280107 00:20:58.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (280107) - No such process 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 280107 is not found' 
00:20:58.454 Process with pid 280107 is not found 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:58.454 04:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:00.361 00:21:00.361 real 0m9.886s 00:21:00.361 user 0m24.136s 00:21:00.361 sys 0m5.543s 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:00.361 ************************************ 00:21:00.361 END TEST nvmf_shutdown_tc4 00:21:00.361 ************************************ 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:00.361 00:21:00.361 real 0m36.898s 00:21:00.361 user 1m39.336s 00:21:00.361 sys 0m11.792s 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:00.361 ************************************ 00:21:00.361 END TEST nvmf_shutdown 00:21:00.361 ************************************ 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:00.361 ************************************ 00:21:00.361 START TEST nvmf_nsid 00:21:00.361 ************************************ 00:21:00.361 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:00.620 * Looking for test storage... 
00:21:00.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:00.620 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:00.620 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:21:00.621 04:11:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:00.621 
04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:00.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.621 --rc genhtml_branch_coverage=1 00:21:00.621 --rc genhtml_function_coverage=1 00:21:00.621 --rc genhtml_legend=1 00:21:00.621 --rc geninfo_all_blocks=1 00:21:00.621 --rc 
geninfo_unexecuted_blocks=1 00:21:00.621 00:21:00.621 ' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:00.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.621 --rc genhtml_branch_coverage=1 00:21:00.621 --rc genhtml_function_coverage=1 00:21:00.621 --rc genhtml_legend=1 00:21:00.621 --rc geninfo_all_blocks=1 00:21:00.621 --rc geninfo_unexecuted_blocks=1 00:21:00.621 00:21:00.621 ' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:00.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.621 --rc genhtml_branch_coverage=1 00:21:00.621 --rc genhtml_function_coverage=1 00:21:00.621 --rc genhtml_legend=1 00:21:00.621 --rc geninfo_all_blocks=1 00:21:00.621 --rc geninfo_unexecuted_blocks=1 00:21:00.621 00:21:00.621 ' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:00.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:00.621 --rc genhtml_branch_coverage=1 00:21:00.621 --rc genhtml_function_coverage=1 00:21:00.621 --rc genhtml_legend=1 00:21:00.621 --rc geninfo_all_blocks=1 00:21:00.621 --rc geninfo_unexecuted_blocks=1 00:21:00.621 00:21:00.621 ' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:00.621 04:11:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:00.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:00.621 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:21:00.622 04:11:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:03.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:03.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:03.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:03.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:03.156 04:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:03.156 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:03.156 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:21:03.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:03.157 00:21:03.157 --- 10.0.0.2 ping statistics --- 00:21:03.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.157 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:21:03.157 00:21:03.157 --- 10.0.0.1 ping statistics --- 00:21:03.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.157 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:03.157 04:11:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=282903 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 282903 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 282903 ']' 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:03.157 [2024-12-09 04:11:31.365152] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:21:03.157 [2024-12-09 04:11:31.365230] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.157 [2024-12-09 04:11:31.440090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.157 [2024-12-09 04:11:31.497280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:03.157 [2024-12-09 04:11:31.497349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:03.157 [2024-12-09 04:11:31.497364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:03.157 [2024-12-09 04:11:31.497375] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:03.157 [2024-12-09 04:11:31.497400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:03.157 [2024-12-09 04:11:31.497990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=283048 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.157 
04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=7b65b6aa-8a1e-4e75-a3bb-1fedb5c82920 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=cf3fb3af-eeda-4c24-b17a-cc785633e568 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=51c56393-6ec0-4504-ae50-a8db3fd6141a 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.157 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:03.157 null0 00:21:03.157 null1 00:21:03.157 null2 00:21:03.157 [2024-12-09 04:11:31.683365] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.157 [2024-12-09 04:11:31.698784] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:21:03.157 [2024-12-09 04:11:31.698852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283048 ] 00:21:03.157 [2024-12-09 04:11:31.707633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 283048 /var/tmp/tgt2.sock 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 283048 ']' 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:21:03.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.419 04:11:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:03.419 [2024-12-09 04:11:31.767619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.419 [2024-12-09 04:11:31.825891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:03.677 04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.677 04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:03.677 04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:03.935 [2024-12-09 04:11:32.481583] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.935 [2024-12-09 04:11:32.497806] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:04.194 nvme0n1 nvme0n2 00:21:04.194 nvme1n1 00:21:04.194 04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:04.194 04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:04.194 04:11:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:04.759 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:04.760 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:04.760 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:04.760 04:11:33 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 7b65b6aa-8a1e-4e75-a3bb-1fedb5c82920 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:05.692 04:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=7b65b6aa8a1e4e75a3bb1fedb5c82920 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 7B65B6AA8A1E4E75A3BB1FEDB5C82920 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 7B65B6AA8A1E4E75A3BB1FEDB5C82920 == \7\B\6\5\B\6\A\A\8\A\1\E\4\E\7\5\A\3\B\B\1\F\E\D\B\5\C\8\2\9\2\0 ]] 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid cf3fb3af-eeda-4c24-b17a-cc785633e568 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:05.692 
04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=cf3fb3afeeda4c24b17acc785633e568 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo CF3FB3AFEEDA4C24B17ACC785633E568 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ CF3FB3AFEEDA4C24B17ACC785633E568 == \C\F\3\F\B\3\A\F\E\E\D\A\4\C\2\4\B\1\7\A\C\C\7\8\5\6\3\3\E\5\6\8 ]] 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 51c56393-6ec0-4504-ae50-a8db3fd6141a 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:21:05.692 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=51c563936ec04504ae50a8db3fd6141a 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 51C563936EC04504AE50A8DB3FD6141A 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 51C563936EC04504AE50A8DB3FD6141A == \5\1\C\5\6\3\9\3\6\E\C\0\4\5\0\4\A\E\5\0\A\8\D\B\3\F\D\6\1\4\1\A ]] 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 283048 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 283048 ']' 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 283048 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283048 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283048' 00:21:05.949 killing process with pid 283048 00:21:05.949 04:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 283048 00:21:05.949 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 283048 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:06.512 rmmod nvme_tcp 00:21:06.512 rmmod nvme_fabrics 00:21:06.512 rmmod nvme_keyring 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 282903 ']' 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 282903 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 282903 ']' 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 282903 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:06.512 04:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:06.512 04:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 282903 00:21:06.512 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:06.512 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:06.512 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 282903' 00:21:06.512 killing process with pid 282903 00:21:06.512 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 282903 00:21:06.512 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 282903 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:06.769 04:11:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:06.769 04:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.302 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:09.302 00:21:09.302 real 0m8.368s 00:21:09.302 user 0m8.267s 00:21:09.302 sys 0m2.629s 00:21:09.302 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.302 04:11:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:09.302 ************************************ 00:21:09.302 END TEST nvmf_nsid 00:21:09.302 ************************************ 00:21:09.302 04:11:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:09.302 00:21:09.302 real 11m47.763s 00:21:09.302 user 27m57.744s 00:21:09.302 sys 2m43.459s 00:21:09.302 04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:09.302 04:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:09.302 ************************************ 00:21:09.302 END TEST nvmf_target_extra 00:21:09.302 ************************************ 00:21:09.302 04:11:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:09.302 04:11:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.302 04:11:37 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.302 04:11:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:09.302 ************************************ 00:21:09.302 START TEST nvmf_host 00:21:09.302 ************************************ 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:09.302 * Looking for test storage... 
00:21:09.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:09.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.302 --rc genhtml_branch_coverage=1 00:21:09.302 --rc genhtml_function_coverage=1 00:21:09.302 --rc genhtml_legend=1 00:21:09.302 --rc geninfo_all_blocks=1 00:21:09.302 --rc geninfo_unexecuted_blocks=1 00:21:09.302 00:21:09.302 ' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:09.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.302 --rc genhtml_branch_coverage=1 00:21:09.302 --rc genhtml_function_coverage=1 00:21:09.302 --rc genhtml_legend=1 00:21:09.302 --rc 
geninfo_all_blocks=1 00:21:09.302 --rc geninfo_unexecuted_blocks=1 00:21:09.302 00:21:09.302 ' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:09.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.302 --rc genhtml_branch_coverage=1 00:21:09.302 --rc genhtml_function_coverage=1 00:21:09.302 --rc genhtml_legend=1 00:21:09.302 --rc geninfo_all_blocks=1 00:21:09.302 --rc geninfo_unexecuted_blocks=1 00:21:09.302 00:21:09.302 ' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:09.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.302 --rc genhtml_branch_coverage=1 00:21:09.302 --rc genhtml_function_coverage=1 00:21:09.302 --rc genhtml_legend=1 00:21:09.302 --rc geninfo_all_blocks=1 00:21:09.302 --rc geninfo_unexecuted_blocks=1 00:21:09.302 00:21:09.302 ' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.302 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.303 ************************************ 00:21:09.303 START TEST nvmf_multicontroller 00:21:09.303 ************************************ 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:09.303 * Looking for test storage... 
00:21:09.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.303 --rc genhtml_branch_coverage=1 00:21:09.303 --rc genhtml_function_coverage=1 
00:21:09.303 --rc genhtml_legend=1 00:21:09.303 --rc geninfo_all_blocks=1 00:21:09.303 --rc geninfo_unexecuted_blocks=1 00:21:09.303 00:21:09.303 ' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.303 --rc genhtml_branch_coverage=1 00:21:09.303 --rc genhtml_function_coverage=1 00:21:09.303 --rc genhtml_legend=1 00:21:09.303 --rc geninfo_all_blocks=1 00:21:09.303 --rc geninfo_unexecuted_blocks=1 00:21:09.303 00:21:09.303 ' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.303 --rc genhtml_branch_coverage=1 00:21:09.303 --rc genhtml_function_coverage=1 00:21:09.303 --rc genhtml_legend=1 00:21:09.303 --rc geninfo_all_blocks=1 00:21:09.303 --rc geninfo_unexecuted_blocks=1 00:21:09.303 00:21:09.303 ' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:09.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:09.303 --rc genhtml_branch_coverage=1 00:21:09.303 --rc genhtml_function_coverage=1 00:21:09.303 --rc genhtml_legend=1 00:21:09.303 --rc geninfo_all_blocks=1 00:21:09.303 --rc geninfo_unexecuted_blocks=1 00:21:09.303 00:21:09.303 ' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.303 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:09.304 04:11:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:09.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:21:09.304 04:11:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:11.202 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:11.203 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:11.203 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:11.203 04:11:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:11.203 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:11.203 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:11.462 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:11.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:11.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:21:11.462 00:21:11.462 --- 10.0.0.2 ping statistics --- 00:21:11.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.462 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:11.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:11.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:21:11.462 00:21:11.462 --- 10.0.0.1 ping statistics --- 00:21:11.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:11.462 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:11.462 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=285481 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 285481 00:21:11.463 04:11:39 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 285481 ']' 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:11.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.463 04:11:39 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.463 [2024-12-09 04:11:39.985364] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:21:11.463 [2024-12-09 04:11:39.985476] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.721 [2024-12-09 04:11:40.062109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:11.721 [2024-12-09 04:11:40.123484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.721 [2024-12-09 04:11:40.123578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:11.721 [2024-12-09 04:11:40.123592] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.721 [2024-12-09 04:11:40.123603] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.721 [2024-12-09 04:11:40.123628] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.721 [2024-12-09 04:11:40.125293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.721 [2024-12-09 04:11:40.125323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.721 [2024-12-09 04:11:40.125327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.721 [2024-12-09 04:11:40.280808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.721 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 Malloc0 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 [2024-12-09 
04:11:40.343232] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 [2024-12-09 04:11:40.351051] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 Malloc1 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=285511 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 285511 /var/tmp/bdevperf.sock 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 285511 ']' 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:11.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:11.980 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.239 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.239 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:21:12.239 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:12.239 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.239 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.497 NVMe0n1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.497 1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:21:12.497 04:11:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.497 request: 00:21:12.497 { 00:21:12.497 "name": "NVMe0", 00:21:12.497 "trtype": "tcp", 00:21:12.497 "traddr": "10.0.0.2", 00:21:12.497 "adrfam": "ipv4", 00:21:12.497 "trsvcid": "4420", 00:21:12.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.497 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:12.497 "hostaddr": "10.0.0.1", 00:21:12.497 "prchk_reftag": false, 00:21:12.497 "prchk_guard": false, 00:21:12.497 "hdgst": false, 00:21:12.497 "ddgst": false, 00:21:12.497 "allow_unrecognized_csi": false, 00:21:12.497 "method": "bdev_nvme_attach_controller", 00:21:12.497 "req_id": 1 00:21:12.497 } 00:21:12.497 Got JSON-RPC error response 00:21:12.497 response: 00:21:12.497 { 00:21:12.497 "code": -114, 00:21:12.497 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:12.497 } 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.497 04:11:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.497 request: 00:21:12.497 { 00:21:12.497 "name": "NVMe0", 00:21:12.497 "trtype": "tcp", 00:21:12.497 "traddr": "10.0.0.2", 00:21:12.497 "adrfam": "ipv4", 00:21:12.497 "trsvcid": "4420", 00:21:12.497 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:12.497 "hostaddr": "10.0.0.1", 00:21:12.497 "prchk_reftag": false, 00:21:12.497 "prchk_guard": false, 00:21:12.497 "hdgst": false, 00:21:12.497 "ddgst": false, 00:21:12.497 "allow_unrecognized_csi": false, 00:21:12.497 "method": "bdev_nvme_attach_controller", 00:21:12.497 "req_id": 1 00:21:12.497 } 00:21:12.497 Got JSON-RPC error response 00:21:12.497 response: 00:21:12.497 { 00:21:12.497 "code": -114, 00:21:12.497 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:12.497 } 00:21:12.497 04:11:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.497 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.498 request: 00:21:12.498 { 00:21:12.498 "name": "NVMe0", 00:21:12.498 "trtype": "tcp", 00:21:12.498 "traddr": "10.0.0.2", 00:21:12.498 "adrfam": "ipv4", 00:21:12.498 "trsvcid": "4420", 00:21:12.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.498 "hostaddr": "10.0.0.1", 00:21:12.498 "prchk_reftag": false, 00:21:12.498 "prchk_guard": false, 00:21:12.498 "hdgst": false, 00:21:12.498 "ddgst": false, 00:21:12.498 "multipath": "disable", 00:21:12.498 "allow_unrecognized_csi": false, 00:21:12.498 "method": "bdev_nvme_attach_controller", 00:21:12.498 "req_id": 1 00:21:12.498 } 00:21:12.498 Got JSON-RPC error response 00:21:12.498 response: 00:21:12.498 { 00:21:12.498 "code": -114, 00:21:12.498 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:21:12.498 } 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.498 request: 00:21:12.498 { 00:21:12.498 "name": "NVMe0", 00:21:12.498 "trtype": "tcp", 00:21:12.498 "traddr": "10.0.0.2", 00:21:12.498 "adrfam": "ipv4", 00:21:12.498 "trsvcid": "4420", 00:21:12.498 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:12.498 "hostaddr": "10.0.0.1", 00:21:12.498 "prchk_reftag": false, 00:21:12.498 "prchk_guard": false, 00:21:12.498 "hdgst": false, 00:21:12.498 "ddgst": false, 00:21:12.498 "multipath": "failover", 00:21:12.498 "allow_unrecognized_csi": false, 00:21:12.498 "method": "bdev_nvme_attach_controller", 00:21:12.498 "req_id": 1 00:21:12.498 } 00:21:12.498 Got JSON-RPC error response 00:21:12.498 response: 00:21:12.498 { 00:21:12.498 "code": -114, 00:21:12.498 "message": "A controller named NVMe0 already exists with the specified network path" 00:21:12.498 } 00:21:12.498 04:11:40 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.498 04:11:40 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.756 NVMe0n1 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.756 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.014 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:13.014 04:11:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:13.949 { 00:21:13.949 "results": [ 00:21:13.949 { 00:21:13.949 "job": "NVMe0n1", 00:21:13.949 "core_mask": "0x1", 00:21:13.949 "workload": "write", 00:21:13.949 "status": "finished", 00:21:13.949 "queue_depth": 128, 00:21:13.949 "io_size": 4096, 00:21:13.949 "runtime": 1.005762, 00:21:13.949 "iops": 18249.84439658687, 00:21:13.949 "mibps": 71.28845467416745, 00:21:13.949 "io_failed": 0, 00:21:13.949 "io_timeout": 0, 00:21:13.949 "avg_latency_us": 7002.143096986389, 00:21:13.949 "min_latency_us": 4514.702222222222, 00:21:13.949 "max_latency_us": 14854.826666666666 00:21:13.949 } 00:21:13.949 ], 00:21:13.949 "core_count": 1 00:21:13.949 } 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 285511 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 285511 ']' 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 285511 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285511 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285511' 00:21:14.207 killing process with pid 285511 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 285511 00:21:14.207 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 285511 00:21:14.465 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:21:14.466 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:14.466 [2024-12-09 04:11:40.456103] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:21:14.466 [2024-12-09 04:11:40.456192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid285511 ] 00:21:14.466 [2024-12-09 04:11:40.528170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.466 [2024-12-09 04:11:40.589917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.466 [2024-12-09 04:11:41.384381] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name 9b04d6a0-267e-41a8-bced-539ae2b96927 already exists 00:21:14.466 [2024-12-09 04:11:41.384423] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:9b04d6a0-267e-41a8-bced-539ae2b96927 alias for bdev NVMe1n1 00:21:14.466 [2024-12-09 04:11:41.384438] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:14.466 Running I/O for 1 seconds... 00:21:14.466 18227.00 IOPS, 71.20 MiB/s 00:21:14.466 Latency(us) 00:21:14.466 [2024-12-09T03:11:43.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.466 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:14.466 NVMe0n1 : 1.01 18249.84 71.29 0.00 0.00 7002.14 4514.70 14854.83 00:21:14.466 [2024-12-09T03:11:43.042Z] =================================================================================================================== 00:21:14.466 [2024-12-09T03:11:43.042Z] Total : 18249.84 71.29 0.00 0.00 7002.14 4514.70 14854.83 00:21:14.466 Received shutdown signal, test time was about 1.000000 seconds 00:21:14.466 00:21:14.466 Latency(us) 00:21:14.466 [2024-12-09T03:11:43.042Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.466 [2024-12-09T03:11:43.042Z] =================================================================================================================== 00:21:14.466 [2024-12-09T03:11:43.042Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:21:14.466 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:14.466 rmmod nvme_tcp 00:21:14.466 rmmod nvme_fabrics 00:21:14.466 rmmod nvme_keyring 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 285481 ']' 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 285481 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 285481 ']' 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 285481 
00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 285481 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 285481' 00:21:14.466 killing process with pid 285481 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 285481 00:21:14.466 04:11:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 285481 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.725 04:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:17.257 00:21:17.257 real 0m7.715s 00:21:17.257 user 0m12.415s 00:21:17.257 sys 0m2.406s 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:17.257 ************************************ 00:21:17.257 END TEST nvmf_multicontroller 00:21:17.257 ************************************ 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:17.257 ************************************ 00:21:17.257 START TEST nvmf_aer 00:21:17.257 ************************************ 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:17.257 * Looking for test storage... 
00:21:17.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:17.257 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:17.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.258 --rc genhtml_branch_coverage=1 00:21:17.258 --rc genhtml_function_coverage=1 00:21:17.258 --rc genhtml_legend=1 00:21:17.258 --rc geninfo_all_blocks=1 00:21:17.258 --rc geninfo_unexecuted_blocks=1 00:21:17.258 00:21:17.258 ' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:17.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.258 --rc 
genhtml_branch_coverage=1 00:21:17.258 --rc genhtml_function_coverage=1 00:21:17.258 --rc genhtml_legend=1 00:21:17.258 --rc geninfo_all_blocks=1 00:21:17.258 --rc geninfo_unexecuted_blocks=1 00:21:17.258 00:21:17.258 ' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:17.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.258 --rc genhtml_branch_coverage=1 00:21:17.258 --rc genhtml_function_coverage=1 00:21:17.258 --rc genhtml_legend=1 00:21:17.258 --rc geninfo_all_blocks=1 00:21:17.258 --rc geninfo_unexecuted_blocks=1 00:21:17.258 00:21:17.258 ' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:17.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:17.258 --rc genhtml_branch_coverage=1 00:21:17.258 --rc genhtml_function_coverage=1 00:21:17.258 --rc genhtml_legend=1 00:21:17.258 --rc geninfo_all_blocks=1 00:21:17.258 --rc geninfo_unexecuted_blocks=1 00:21:17.258 00:21:17.258 ' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:17.258 04:11:45 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:17.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:21:17.258 04:11:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:21:19.162 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:19.163 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:19.163 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:19.163 04:11:47 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:19.163 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:19.163 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:19.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:19.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:21:19.163 00:21:19.163 --- 10.0.0.2 ping statistics --- 00:21:19.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.163 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:21:19.163 00:21:19.163 --- 10.0.0.1 ping statistics --- 00:21:19.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.163 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=287851 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 287851 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 287851 ']' 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.163 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.164 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.164 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.164 04:11:47 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.420 [2024-12-09 04:11:47.775884] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:21:19.421 [2024-12-09 04:11:47.775973] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.421 [2024-12-09 04:11:47.847142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.421 [2024-12-09 04:11:47.904942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:19.421 [2024-12-09 04:11:47.904994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.421 [2024-12-09 04:11:47.905022] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.421 [2024-12-09 04:11:47.905033] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.421 [2024-12-09 04:11:47.905043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.421 [2024-12-09 04:11:47.906687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.421 [2024-12-09 04:11:47.906752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.421 [2024-12-09 04:11:47.906816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:19.421 [2024-12-09 04:11:47.906819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 [2024-12-09 04:11:48.058213] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 Malloc0 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 [2024-12-09 04:11:48.119887] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.678 [ 00:21:19.678 { 00:21:19.678 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:19.678 "subtype": "Discovery", 00:21:19.678 "listen_addresses": [], 00:21:19.678 "allow_any_host": true, 00:21:19.678 "hosts": [] 00:21:19.678 }, 00:21:19.678 { 00:21:19.678 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.678 "subtype": "NVMe", 00:21:19.678 "listen_addresses": [ 00:21:19.678 { 00:21:19.678 "trtype": "TCP", 00:21:19.678 "adrfam": "IPv4", 00:21:19.678 "traddr": "10.0.0.2", 00:21:19.678 "trsvcid": "4420" 00:21:19.678 } 00:21:19.678 ], 00:21:19.678 "allow_any_host": true, 00:21:19.678 "hosts": [], 00:21:19.678 "serial_number": "SPDK00000000000001", 00:21:19.678 "model_number": "SPDK bdev Controller", 00:21:19.678 "max_namespaces": 2, 00:21:19.678 "min_cntlid": 1, 00:21:19.678 "max_cntlid": 65519, 00:21:19.678 "namespaces": [ 00:21:19.678 { 00:21:19.678 "nsid": 1, 00:21:19.678 "bdev_name": "Malloc0", 00:21:19.678 "name": "Malloc0", 00:21:19.678 "nguid": "C5065B8CEC3C44EF833B44B01434F6A8", 00:21:19.678 "uuid": "c5065b8c-ec3c-44ef-833b-44b01434f6a8" 00:21:19.678 } 00:21:19.678 ] 00:21:19.678 } 00:21:19.678 ] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=287877 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:19.678 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:19.679 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.935 Malloc1 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.935 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.935 Asynchronous Event Request test 00:21:19.935 Attaching to 10.0.0.2 00:21:19.935 Attached to 10.0.0.2 00:21:19.935 Registering asynchronous event callbacks... 00:21:19.935 Starting namespace attribute notice tests for all controllers... 00:21:19.935 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:19.935 aer_cb - Changed Namespace 00:21:19.935 Cleaning up... 
00:21:19.935 [ 00:21:19.935 { 00:21:19.935 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:19.936 "subtype": "Discovery", 00:21:19.936 "listen_addresses": [], 00:21:19.936 "allow_any_host": true, 00:21:19.936 "hosts": [] 00:21:19.936 }, 00:21:19.936 { 00:21:19.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.936 "subtype": "NVMe", 00:21:19.936 "listen_addresses": [ 00:21:19.936 { 00:21:19.936 "trtype": "TCP", 00:21:19.936 "adrfam": "IPv4", 00:21:19.936 "traddr": "10.0.0.2", 00:21:19.936 "trsvcid": "4420" 00:21:19.936 } 00:21:19.936 ], 00:21:19.936 "allow_any_host": true, 00:21:19.936 "hosts": [], 00:21:19.936 "serial_number": "SPDK00000000000001", 00:21:19.936 "model_number": "SPDK bdev Controller", 00:21:19.936 "max_namespaces": 2, 00:21:19.936 "min_cntlid": 1, 00:21:19.936 "max_cntlid": 65519, 00:21:19.936 "namespaces": [ 00:21:19.936 { 00:21:19.936 "nsid": 1, 00:21:19.936 "bdev_name": "Malloc0", 00:21:19.936 "name": "Malloc0", 00:21:19.936 "nguid": "C5065B8CEC3C44EF833B44B01434F6A8", 00:21:19.936 "uuid": "c5065b8c-ec3c-44ef-833b-44b01434f6a8" 00:21:19.936 }, 00:21:19.936 { 00:21:19.936 "nsid": 2, 00:21:19.936 "bdev_name": "Malloc1", 00:21:19.936 "name": "Malloc1", 00:21:19.936 "nguid": "042A60A85DF245F8A39D93DFE4664FAF", 00:21:19.936 "uuid": "042a60a8-5df2-45f8-a39d-93dfe4664faf" 00:21:19.936 } 00:21:19.936 ] 00:21:19.936 } 00:21:19.936 ] 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 287877 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.936 04:11:48 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:19.936 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:19.936 rmmod nvme_tcp 00:21:20.193 rmmod nvme_fabrics 00:21:20.193 rmmod nvme_keyring 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
287851 ']' 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 287851 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 287851 ']' 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 287851 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 287851 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 287851' 00:21:20.193 killing process with pid 287851 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 287851 00:21:20.193 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 287851 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:20.451 04:11:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:22.358 00:21:22.358 real 0m5.566s 00:21:22.358 user 0m4.415s 00:21:22.358 sys 0m2.027s 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:22.358 ************************************ 00:21:22.358 END TEST nvmf_aer 00:21:22.358 ************************************ 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:22.358 ************************************ 00:21:22.358 START TEST nvmf_async_init 00:21:22.358 ************************************ 00:21:22.358 04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:22.617 * Looking for test storage... 
00:21:22.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:22.617 04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:22.617 04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:21:22.617 04:11:50 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:22.617 04:11:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:22.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.617 --rc genhtml_branch_coverage=1 00:21:22.617 --rc genhtml_function_coverage=1 00:21:22.617 --rc genhtml_legend=1 00:21:22.617 --rc geninfo_all_blocks=1 00:21:22.617 --rc geninfo_unexecuted_blocks=1 00:21:22.617 
00:21:22.617 ' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:22.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.617 --rc genhtml_branch_coverage=1 00:21:22.617 --rc genhtml_function_coverage=1 00:21:22.617 --rc genhtml_legend=1 00:21:22.617 --rc geninfo_all_blocks=1 00:21:22.617 --rc geninfo_unexecuted_blocks=1 00:21:22.617 00:21:22.617 ' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:22.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.617 --rc genhtml_branch_coverage=1 00:21:22.617 --rc genhtml_function_coverage=1 00:21:22.617 --rc genhtml_legend=1 00:21:22.617 --rc geninfo_all_blocks=1 00:21:22.617 --rc geninfo_unexecuted_blocks=1 00:21:22.617 00:21:22.617 ' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:22.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.617 --rc genhtml_branch_coverage=1 00:21:22.617 --rc genhtml_function_coverage=1 00:21:22.617 --rc genhtml_legend=1 00:21:22.617 --rc geninfo_all_blocks=1 00:21:22.617 --rc geninfo_unexecuted_blocks=1 00:21:22.617 00:21:22.617 ' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.617 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:22.618 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=6be58f52ce904eaa8c6af269ad0ae0e8 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:21:22.618 04:11:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:21:25.143 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:25.144 04:11:53 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:25.144 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:25.144 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:25.144 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:25.144 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:25.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:25.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:21:25.144 00:21:25.144 --- 10.0.0.2 ping statistics --- 00:21:25.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.144 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:25.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:25.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:21:25.144 00:21:25.144 --- 10.0.0.1 ping statistics --- 00:21:25.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.144 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=289940 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 289940 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 289940 ']' 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.144 [2024-12-09 04:11:53.442357] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:21:25.144 [2024-12-09 04:11:53.442438] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.144 [2024-12-09 04:11:53.513939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.144 [2024-12-09 04:11:53.566472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.144 [2024-12-09 04:11:53.566535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.144 [2024-12-09 04:11:53.566563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.144 [2024-12-09 04:11:53.566573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.144 [2024-12-09 04:11:53.566583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:25.144 [2024-12-09 04:11:53.567147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.144 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.145 [2024-12-09 04:11:53.706697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.145 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.145 null0 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6be58f52ce904eaa8c6af269ad0ae0e8 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.402 [2024-12-09 04:11:53.747001] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.402 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.659 nvme0n1 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.659 [ 00:21:25.659 { 00:21:25.659 "name": "nvme0n1", 00:21:25.659 "aliases": [ 00:21:25.659 "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8" 00:21:25.659 ], 00:21:25.659 "product_name": "NVMe disk", 00:21:25.659 "block_size": 512, 00:21:25.659 "num_blocks": 2097152, 00:21:25.659 "uuid": "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8", 00:21:25.659 "numa_id": 0, 00:21:25.659 "assigned_rate_limits": { 00:21:25.659 "rw_ios_per_sec": 0, 00:21:25.659 "rw_mbytes_per_sec": 0, 00:21:25.659 "r_mbytes_per_sec": 0, 00:21:25.659 "w_mbytes_per_sec": 0 00:21:25.659 }, 00:21:25.659 "claimed": false, 00:21:25.659 "zoned": false, 00:21:25.659 "supported_io_types": { 00:21:25.659 "read": true, 00:21:25.659 "write": true, 00:21:25.659 "unmap": false, 00:21:25.659 "flush": true, 00:21:25.659 "reset": true, 00:21:25.659 "nvme_admin": true, 00:21:25.659 "nvme_io": true, 00:21:25.659 "nvme_io_md": false, 00:21:25.659 "write_zeroes": true, 00:21:25.659 "zcopy": false, 00:21:25.659 "get_zone_info": false, 00:21:25.659 "zone_management": false, 00:21:25.659 "zone_append": false, 00:21:25.659 "compare": true, 00:21:25.659 "compare_and_write": true, 00:21:25.659 "abort": true, 00:21:25.659 "seek_hole": false, 00:21:25.659 "seek_data": false, 00:21:25.659 "copy": true, 00:21:25.659 
"nvme_iov_md": false 00:21:25.659 }, 00:21:25.659 "memory_domains": [ 00:21:25.659 { 00:21:25.659 "dma_device_id": "system", 00:21:25.659 "dma_device_type": 1 00:21:25.659 } 00:21:25.659 ], 00:21:25.659 "driver_specific": { 00:21:25.659 "nvme": [ 00:21:25.659 { 00:21:25.659 "trid": { 00:21:25.659 "trtype": "TCP", 00:21:25.659 "adrfam": "IPv4", 00:21:25.659 "traddr": "10.0.0.2", 00:21:25.659 "trsvcid": "4420", 00:21:25.659 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:25.659 }, 00:21:25.659 "ctrlr_data": { 00:21:25.659 "cntlid": 1, 00:21:25.659 "vendor_id": "0x8086", 00:21:25.659 "model_number": "SPDK bdev Controller", 00:21:25.659 "serial_number": "00000000000000000000", 00:21:25.659 "firmware_revision": "25.01", 00:21:25.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.659 "oacs": { 00:21:25.659 "security": 0, 00:21:25.659 "format": 0, 00:21:25.659 "firmware": 0, 00:21:25.659 "ns_manage": 0 00:21:25.659 }, 00:21:25.659 "multi_ctrlr": true, 00:21:25.659 "ana_reporting": false 00:21:25.659 }, 00:21:25.659 "vs": { 00:21:25.659 "nvme_version": "1.3" 00:21:25.659 }, 00:21:25.659 "ns_data": { 00:21:25.659 "id": 1, 00:21:25.659 "can_share": true 00:21:25.659 } 00:21:25.659 } 00:21:25.659 ], 00:21:25.659 "mp_policy": "active_passive" 00:21:25.659 } 00:21:25.659 } 00:21:25.659 ] 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.659 04:11:53 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.659 [2024-12-09 04:11:53.996161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:25.659 [2024-12-09 04:11:53.996249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x25f9740 (9): Bad file descriptor 00:21:25.659 [2024-12-09 04:11:54.128393] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.659 [ 00:21:25.659 { 00:21:25.659 "name": "nvme0n1", 00:21:25.659 "aliases": [ 00:21:25.659 "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8" 00:21:25.659 ], 00:21:25.659 "product_name": "NVMe disk", 00:21:25.659 "block_size": 512, 00:21:25.659 "num_blocks": 2097152, 00:21:25.659 "uuid": "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8", 00:21:25.659 "numa_id": 0, 00:21:25.659 "assigned_rate_limits": { 00:21:25.659 "rw_ios_per_sec": 0, 00:21:25.659 "rw_mbytes_per_sec": 0, 00:21:25.659 "r_mbytes_per_sec": 0, 00:21:25.659 "w_mbytes_per_sec": 0 00:21:25.659 }, 00:21:25.659 "claimed": false, 00:21:25.659 "zoned": false, 00:21:25.659 "supported_io_types": { 00:21:25.659 "read": true, 00:21:25.659 "write": true, 00:21:25.659 "unmap": false, 00:21:25.659 "flush": true, 00:21:25.659 "reset": true, 00:21:25.659 "nvme_admin": true, 00:21:25.659 "nvme_io": true, 00:21:25.659 "nvme_io_md": false, 00:21:25.659 "write_zeroes": true, 00:21:25.659 "zcopy": false, 00:21:25.659 "get_zone_info": false, 00:21:25.659 "zone_management": false, 00:21:25.659 "zone_append": false, 00:21:25.659 "compare": true, 00:21:25.659 "compare_and_write": true, 00:21:25.659 "abort": true, 00:21:25.659 "seek_hole": false, 00:21:25.659 "seek_data": false, 00:21:25.659 "copy": true, 00:21:25.659 "nvme_iov_md": false 00:21:25.659 }, 00:21:25.659 "memory_domains": [ 
00:21:25.659 { 00:21:25.659 "dma_device_id": "system", 00:21:25.659 "dma_device_type": 1 00:21:25.659 } 00:21:25.659 ], 00:21:25.659 "driver_specific": { 00:21:25.659 "nvme": [ 00:21:25.659 { 00:21:25.659 "trid": { 00:21:25.659 "trtype": "TCP", 00:21:25.659 "adrfam": "IPv4", 00:21:25.659 "traddr": "10.0.0.2", 00:21:25.659 "trsvcid": "4420", 00:21:25.659 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:25.659 }, 00:21:25.659 "ctrlr_data": { 00:21:25.659 "cntlid": 2, 00:21:25.659 "vendor_id": "0x8086", 00:21:25.659 "model_number": "SPDK bdev Controller", 00:21:25.659 "serial_number": "00000000000000000000", 00:21:25.659 "firmware_revision": "25.01", 00:21:25.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.659 "oacs": { 00:21:25.659 "security": 0, 00:21:25.659 "format": 0, 00:21:25.659 "firmware": 0, 00:21:25.659 "ns_manage": 0 00:21:25.659 }, 00:21:25.659 "multi_ctrlr": true, 00:21:25.659 "ana_reporting": false 00:21:25.659 }, 00:21:25.659 "vs": { 00:21:25.659 "nvme_version": "1.3" 00:21:25.659 }, 00:21:25.659 "ns_data": { 00:21:25.659 "id": 1, 00:21:25.659 "can_share": true 00:21:25.659 } 00:21:25.659 } 00:21:25.659 ], 00:21:25.659 "mp_policy": "active_passive" 00:21:25.659 } 00:21:25.659 } 00:21:25.659 ] 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.659 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.yvb6E3Wj1y 
00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.yvb6E3Wj1y 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.yvb6E3Wj1y 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.660 [2024-12-09 04:11:54.180756] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:25.660 [2024-12-09 04:11:54.180917] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.660 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.660 [2024-12-09 04:11:54.196797] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:25.917 nvme0n1 00:21:25.917 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.917 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:25.917 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.917 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.917 [ 00:21:25.917 { 00:21:25.917 "name": "nvme0n1", 00:21:25.917 "aliases": [ 00:21:25.917 "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8" 00:21:25.917 ], 00:21:25.917 "product_name": "NVMe disk", 00:21:25.917 "block_size": 512, 00:21:25.917 "num_blocks": 2097152, 00:21:25.917 "uuid": "6be58f52-ce90-4eaa-8c6a-f269ad0ae0e8", 00:21:25.917 "numa_id": 0, 00:21:25.917 "assigned_rate_limits": { 00:21:25.917 "rw_ios_per_sec": 0, 00:21:25.917 
"rw_mbytes_per_sec": 0, 00:21:25.917 "r_mbytes_per_sec": 0, 00:21:25.917 "w_mbytes_per_sec": 0 00:21:25.917 }, 00:21:25.917 "claimed": false, 00:21:25.917 "zoned": false, 00:21:25.917 "supported_io_types": { 00:21:25.917 "read": true, 00:21:25.917 "write": true, 00:21:25.917 "unmap": false, 00:21:25.917 "flush": true, 00:21:25.917 "reset": true, 00:21:25.917 "nvme_admin": true, 00:21:25.917 "nvme_io": true, 00:21:25.917 "nvme_io_md": false, 00:21:25.917 "write_zeroes": true, 00:21:25.917 "zcopy": false, 00:21:25.917 "get_zone_info": false, 00:21:25.917 "zone_management": false, 00:21:25.917 "zone_append": false, 00:21:25.917 "compare": true, 00:21:25.917 "compare_and_write": true, 00:21:25.917 "abort": true, 00:21:25.917 "seek_hole": false, 00:21:25.917 "seek_data": false, 00:21:25.917 "copy": true, 00:21:25.917 "nvme_iov_md": false 00:21:25.917 }, 00:21:25.917 "memory_domains": [ 00:21:25.917 { 00:21:25.917 "dma_device_id": "system", 00:21:25.917 "dma_device_type": 1 00:21:25.917 } 00:21:25.917 ], 00:21:25.917 "driver_specific": { 00:21:25.917 "nvme": [ 00:21:25.917 { 00:21:25.917 "trid": { 00:21:25.917 "trtype": "TCP", 00:21:25.917 "adrfam": "IPv4", 00:21:25.917 "traddr": "10.0.0.2", 00:21:25.917 "trsvcid": "4421", 00:21:25.917 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:25.917 }, 00:21:25.917 "ctrlr_data": { 00:21:25.917 "cntlid": 3, 00:21:25.917 "vendor_id": "0x8086", 00:21:25.917 "model_number": "SPDK bdev Controller", 00:21:25.917 "serial_number": "00000000000000000000", 00:21:25.917 "firmware_revision": "25.01", 00:21:25.917 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:25.917 "oacs": { 00:21:25.917 "security": 0, 00:21:25.917 "format": 0, 00:21:25.917 "firmware": 0, 00:21:25.917 "ns_manage": 0 00:21:25.917 }, 00:21:25.918 "multi_ctrlr": true, 00:21:25.918 "ana_reporting": false 00:21:25.918 }, 00:21:25.918 "vs": { 00:21:25.918 "nvme_version": "1.3" 00:21:25.918 }, 00:21:25.918 "ns_data": { 00:21:25.918 "id": 1, 00:21:25.918 "can_share": true 00:21:25.918 } 
00:21:25.918 } 00:21:25.918 ], 00:21:25.918 "mp_policy": "active_passive" 00:21:25.918 } 00:21:25.918 } 00:21:25.918 ] 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.yvb6E3Wj1y 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:25.918 rmmod nvme_tcp 00:21:25.918 rmmod nvme_fabrics 00:21:25.918 rmmod nvme_keyring 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:21:25.918 04:11:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 289940 ']' 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 289940 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 289940 ']' 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 289940 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289940 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289940' 00:21:25.918 killing process with pid 289940 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 289940 00:21:25.918 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 289940 00:21:26.176 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:26.177 04:11:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:26.177 04:11:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:28.078 00:21:28.078 real 0m5.702s 00:21:28.078 user 0m2.153s 00:21:28.078 sys 0m1.967s 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:28.078 ************************************ 00:21:28.078 END TEST nvmf_async_init 00:21:28.078 ************************************ 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.078 04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.336 ************************************ 00:21:28.336 START TEST dma 00:21:28.336 ************************************ 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:28.336 * 
Looking for test storage... 00:21:28.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.336 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:28.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.337 --rc genhtml_branch_coverage=1 00:21:28.337 --rc genhtml_function_coverage=1 00:21:28.337 --rc genhtml_legend=1 00:21:28.337 --rc geninfo_all_blocks=1 00:21:28.337 --rc geninfo_unexecuted_blocks=1 00:21:28.337 00:21:28.337 ' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:28.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.337 --rc genhtml_branch_coverage=1 00:21:28.337 --rc genhtml_function_coverage=1 
00:21:28.337 --rc genhtml_legend=1 00:21:28.337 --rc geninfo_all_blocks=1 00:21:28.337 --rc geninfo_unexecuted_blocks=1 00:21:28.337 00:21:28.337 ' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:28.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.337 --rc genhtml_branch_coverage=1 00:21:28.337 --rc genhtml_function_coverage=1 00:21:28.337 --rc genhtml_legend=1 00:21:28.337 --rc geninfo_all_blocks=1 00:21:28.337 --rc geninfo_unexecuted_blocks=1 00:21:28.337 00:21:28.337 ' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:28.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.337 --rc genhtml_branch_coverage=1 00:21:28.337 --rc genhtml_function_coverage=1 00:21:28.337 --rc genhtml_legend=1 00:21:28.337 --rc geninfo_all_blocks=1 00:21:28.337 --rc geninfo_unexecuted_blocks=1 00:21:28.337 00:21:28.337 ' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:21:28.337 
04:11:56 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.337 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:21:28.338 00:21:28.338 real 0m0.170s 00:21:28.338 user 0m0.112s 00:21:28.338 sys 0m0.067s 00:21:28.338 04:11:56 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:21:28.338 ************************************ 00:21:28.338 END TEST dma 00:21:28.338 ************************************ 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:28.338 ************************************ 00:21:28.338 START TEST nvmf_identify 00:21:28.338 ************************************ 00:21:28.338 04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:28.596 * Looking for test storage... 
00:21:28.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:28.596 04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:28.596 04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:21:28.596 04:11:56 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:28.596 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:28.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.597 --rc genhtml_branch_coverage=1 00:21:28.597 --rc genhtml_function_coverage=1 00:21:28.597 --rc genhtml_legend=1 00:21:28.597 --rc geninfo_all_blocks=1 00:21:28.597 --rc geninfo_unexecuted_blocks=1 00:21:28.597 00:21:28.597 ' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:21:28.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.597 --rc genhtml_branch_coverage=1 00:21:28.597 --rc genhtml_function_coverage=1 00:21:28.597 --rc genhtml_legend=1 00:21:28.597 --rc geninfo_all_blocks=1 00:21:28.597 --rc geninfo_unexecuted_blocks=1 00:21:28.597 00:21:28.597 ' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:28.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.597 --rc genhtml_branch_coverage=1 00:21:28.597 --rc genhtml_function_coverage=1 00:21:28.597 --rc genhtml_legend=1 00:21:28.597 --rc geninfo_all_blocks=1 00:21:28.597 --rc geninfo_unexecuted_blocks=1 00:21:28.597 00:21:28.597 ' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:28.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:28.597 --rc genhtml_branch_coverage=1 00:21:28.597 --rc genhtml_function_coverage=1 00:21:28.597 --rc genhtml_legend=1 00:21:28.597 --rc geninfo_all_blocks=1 00:21:28.597 --rc geninfo_unexecuted_blocks=1 00:21:28.597 00:21:28.597 ' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:28.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:21:28.597 04:11:57 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:31.123 04:11:59 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:31.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:31.123 
04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:31.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.123 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:31.124 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:31.124 04:11:59 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:31.124 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:31.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:21:31.124 00:21:31.124 --- 10.0.0.2 ping statistics --- 00:21:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.124 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:31.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:31.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:21:31.124 00:21:31.124 --- 10.0.0.1 ping statistics --- 00:21:31.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.124 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=292082 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 292082 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 292082 ']' 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:31.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.124 [2024-12-09 04:11:59.396963] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:21:31.124 [2024-12-09 04:11:59.397050] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.124 [2024-12-09 04:11:59.468069] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:31.124 [2024-12-09 04:11:59.527063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.124 [2024-12-09 04:11:59.527112] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.124 [2024-12-09 04:11:59.527140] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.124 [2024-12-09 04:11:59.527151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.124 [2024-12-09 04:11:59.527161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:31.124 [2024-12-09 04:11:59.528691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.124 [2024-12-09 04:11:59.528755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.124 [2024-12-09 04:11:59.528823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:31.124 [2024-12-09 04:11:59.528826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.124 [2024-12-09 04:11:59.652205] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.124 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 Malloc0 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 04:11:59 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 [2024-12-09 04:11:59.738499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 04:11:59 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.384 [ 00:21:31.384 { 00:21:31.384 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:31.384 "subtype": "Discovery", 00:21:31.384 "listen_addresses": [ 00:21:31.384 { 00:21:31.384 "trtype": "TCP", 00:21:31.384 "adrfam": "IPv4", 00:21:31.384 "traddr": "10.0.0.2", 00:21:31.384 "trsvcid": "4420" 00:21:31.384 } 00:21:31.384 ], 00:21:31.384 "allow_any_host": true, 00:21:31.384 "hosts": [] 00:21:31.384 }, 00:21:31.384 { 00:21:31.384 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.384 "subtype": "NVMe", 00:21:31.384 "listen_addresses": [ 00:21:31.384 { 00:21:31.384 "trtype": "TCP", 00:21:31.384 "adrfam": "IPv4", 00:21:31.384 "traddr": "10.0.0.2", 00:21:31.384 "trsvcid": "4420" 00:21:31.384 } 00:21:31.384 ], 00:21:31.384 "allow_any_host": true, 00:21:31.384 "hosts": [], 00:21:31.384 "serial_number": "SPDK00000000000001", 00:21:31.384 "model_number": "SPDK bdev Controller", 00:21:31.384 "max_namespaces": 32, 00:21:31.384 "min_cntlid": 1, 00:21:31.384 "max_cntlid": 65519, 00:21:31.384 "namespaces": [ 00:21:31.384 { 00:21:31.384 "nsid": 1, 00:21:31.384 "bdev_name": "Malloc0", 00:21:31.384 "name": "Malloc0", 00:21:31.384 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:31.384 "eui64": "ABCDEF0123456789", 00:21:31.384 "uuid": "c1f55b3a-c777-461f-8bb4-17aef1175c5a" 00:21:31.384 } 00:21:31.384 ] 00:21:31.384 } 00:21:31.384 ] 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.384 04:11:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:31.384 [2024-12-09 04:11:59.779885] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:21:31.384 [2024-12-09 04:11:59.779929] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292112 ] 00:21:31.384 [2024-12-09 04:11:59.832947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:31.384 [2024-12-09 04:11:59.833017] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:31.384 [2024-12-09 04:11:59.833028] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:31.384 [2024-12-09 04:11:59.833052] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:31.384 [2024-12-09 04:11:59.833067] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:31.384 [2024-12-09 04:11:59.836732] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:31.384 [2024-12-09 04:11:59.836806] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2231690 0 00:21:31.384 [2024-12-09 04:11:59.836945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:31.385 [2024-12-09 04:11:59.836962] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:31.385 [2024-12-09 04:11:59.836977] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:31.385 [2024-12-09 04:11:59.836984] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:31.385 [2024-12-09 04:11:59.837030] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.837042] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.837049] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.837066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:31.385 [2024-12-09 04:11:59.837092] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.844286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.844305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.385 [2024-12-09 04:11:59.844313] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844320] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.844336] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:31.385 [2024-12-09 04:11:59.844348] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:31.385 [2024-12-09 04:11:59.844359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:31.385 [2024-12-09 04:11:59.844383] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 
00:21:31.385 [2024-12-09 04:11:59.844415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.844441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.844583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.844597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.385 [2024-12-09 04:11:59.844604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844611] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.844625] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:31.385 [2024-12-09 04:11:59.844640] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:31.385 [2024-12-09 04:11:59.844652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844667] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.844677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.844699] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.844776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.844789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:31.385 [2024-12-09 04:11:59.844796] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.844811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:31.385 [2024-12-09 04:11:59.844825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:31.385 [2024-12-09 04:11:59.844837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.844862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.844882] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.844960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.844973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.385 [2024-12-09 04:11:59.844980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.844987] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.844995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:31.385 [2024-12-09 04:11:59.845012] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.845038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.845064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.845135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.845148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.385 [2024-12-09 04:11:59.845155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845161] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.845169] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:31.385 [2024-12-09 04:11:59.845178] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:31.385 [2024-12-09 04:11:59.845191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:31.385 [2024-12-09 04:11:59.845301] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:31.385 [2024-12-09 04:11:59.845311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:31.385 [2024-12-09 04:11:59.845326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845334] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845340] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.845350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.845372] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.845489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.845503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.385 [2024-12-09 04:11:59.845510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.845525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:31.385 [2024-12-09 04:11:59.845542] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845551] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.845568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.845589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 
04:11:59.845665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.385 [2024-12-09 04:11:59.845677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.385 [2024-12-09 04:11:59.845684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.385 [2024-12-09 04:11:59.845698] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:31.385 [2024-12-09 04:11:59.845707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:31.385 [2024-12-09 04:11:59.845719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:31.385 [2024-12-09 04:11:59.845740] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:31.385 [2024-12-09 04:11:59.845757] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.385 [2024-12-09 04:11:59.845776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.385 [2024-12-09 04:11:59.845797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.385 [2024-12-09 04:11:59.845925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:31.385 [2024-12-09 04:11:59.845940] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:31.385 [2024-12-09 04:11:59.845947] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845954] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=4096, cccid=0 00:21:31.385 [2024-12-09 04:11:59.845962] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293100) on tqpair(0x2231690): expected_datao=0, payload_size=4096 00:21:31.385 [2024-12-09 04:11:59.845970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845987] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:31.385 [2024-12-09 04:11:59.845997] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.386 [2024-12-09 04:11:59.887393] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.386 [2024-12-09 04:11:59.887401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.386 [2024-12-09 04:11:59.887427] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:31.386 [2024-12-09 04:11:59.887438] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:31.386 [2024-12-09 04:11:59.887446] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:31.386 [2024-12-09 04:11:59.887455] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:31.386 [2024-12-09 04:11:59.887462] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:31.386 [2024-12-09 04:11:59.887471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:31.386 [2024-12-09 04:11:59.887485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:31.386 [2024-12-09 04:11:59.887498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.887525] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:31.386 [2024-12-09 04:11:59.887549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.386 [2024-12-09 04:11:59.887635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.386 [2024-12-09 04:11:59.887649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.386 [2024-12-09 04:11:59.887656] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690 00:21:31.386 [2024-12-09 04:11:59.887680] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887688] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.887705] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.386 [2024-12-09 04:11:59.887715] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.887737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.386 [2024-12-09 04:11:59.887747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887753] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.887768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.386 [2024-12-09 04:11:59.887778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887785] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.887799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.386 [2024-12-09 04:11:59.887808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:31.386 [2024-12-09 04:11:59.887828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:31.386 [2024-12-09 04:11:59.887842] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.887850] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.887860] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.386 [2024-12-09 04:11:59.887884] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293100, cid 0, qid 0 00:21:31.386 [2024-12-09 04:11:59.887895] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293280, cid 1, qid 0 00:21:31.386 [2024-12-09 04:11:59.887903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293400, cid 2, qid 0 00:21:31.386 [2024-12-09 04:11:59.887911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.386 [2024-12-09 04:11:59.887919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0 00:21:31.386 [2024-12-09 04:11:59.888057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.386 [2024-12-09 04:11:59.888070] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.386 [2024-12-09 04:11:59.888077] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.888084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690 00:21:31.386 [2024-12-09 04:11:59.888092] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:31.386 [2024-12-09 04:11:59.888101] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:31.386 [2024-12-09 04:11:59.888119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.888133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.888144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.386 [2024-12-09 04:11:59.888166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0 00:21:31.386 [2024-12-09 04:11:59.888266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:31.386 [2024-12-09 04:11:59.892291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:31.386 [2024-12-09 04:11:59.892300] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=4096, cccid=4 00:21:31.386 [2024-12-09 04:11:59.892314] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=4096 00:21:31.386 [2024-12-09 04:11:59.892322] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892332] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892339] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.386 [2024-12-09 04:11:59.892361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.386 [2024-12-09 04:11:59.892368] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x2293700) on tqpair=0x2231690 00:21:31.386 [2024-12-09 04:11:59.892394] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:31.386 [2024-12-09 04:11:59.892434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892445] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.892456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.386 [2024-12-09 04:11:59.892468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892482] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.892491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.386 [2024-12-09 04:11:59.892518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0 00:21:31.386 [2024-12-09 04:11:59.892530] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293880, cid 5, qid 0 00:21:31.386 [2024-12-09 04:11:59.892715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:31.386 [2024-12-09 04:11:59.892731] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:31.386 [2024-12-09 04:11:59.892738] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892744] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=1024, cccid=4 00:21:31.386 [2024-12-09 04:11:59.892752] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=1024 00:21:31.386 [2024-12-09 04:11:59.892759] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892769] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892776] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892784] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.386 [2024-12-09 04:11:59.892793] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.386 [2024-12-09 04:11:59.892804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.892811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293880) on tqpair=0x2231690 00:21:31.386 [2024-12-09 04:11:59.938289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.386 [2024-12-09 04:11:59.938307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.386 [2024-12-09 04:11:59.938315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.938322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690 00:21:31.386 [2024-12-09 04:11:59.938340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.386 [2024-12-09 04:11:59.938350] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690) 00:21:31.386 [2024-12-09 04:11:59.938362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.386 [2024-12-09 04:11:59.938392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0 00:21:31.386 [2024-12-09 04:11:59.938530] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:31.386 [2024-12-09 04:11:59.938543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:31.387 [2024-12-09 04:11:59.938550] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:31.387 [2024-12-09 04:11:59.938556] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=3072, cccid=4 00:21:31.387 [2024-12-09 04:11:59.938564] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=3072 00:21:31.387 [2024-12-09 04:11:59.938571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.387 [2024-12-09 04:11:59.938591] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:31.387 [2024-12-09 04:11:59.938600] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:31.649 [2024-12-09 04:11:59.979384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.649 [2024-12-09 04:11:59.979403] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.649 [2024-12-09 04:11:59.979411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:11:59.979418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690 00:21:31.650 [2024-12-09 04:11:59.979434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:11:59.979444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2231690) 00:21:31.650 [2024-12-09 04:11:59.979456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.650 [2024-12-09 04:11:59.979485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293700, cid 4, qid 0 00:21:31.650 [2024-12-09 
04:11:59.979583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:31.650 [2024-12-09 04:11:59.979595] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:31.650 [2024-12-09 04:11:59.979603] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:11:59.979609] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2231690): datao=0, datal=8, cccid=4 00:21:31.650 [2024-12-09 04:11:59.979617] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2293700) on tqpair(0x2231690): expected_datao=0, payload_size=8 00:21:31.650 [2024-12-09 04:11:59.979624] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:11:59.979634] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:11:59.979641] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:12:00.024292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.650 [2024-12-09 04:12:00.024331] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.650 [2024-12-09 04:12:00.024339] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.650 [2024-12-09 04:12:00.024354] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293700) on tqpair=0x2231690 00:21:31.650 ===================================================== 00:21:31.650 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:31.650 ===================================================== 00:21:31.650 Controller Capabilities/Features 00:21:31.650 ================================ 00:21:31.650 Vendor ID: 0000 00:21:31.650 Subsystem Vendor ID: 0000 00:21:31.650 Serial Number: .................... 00:21:31.650 Model Number: ........................................ 
00:21:31.650 Firmware Version: 25.01
00:21:31.650 Recommended Arb Burst: 0
00:21:31.650 IEEE OUI Identifier: 00 00 00
00:21:31.650 Multi-path I/O
00:21:31.650 May have multiple subsystem ports: No
00:21:31.650 May have multiple controllers: No
00:21:31.650 Associated with SR-IOV VF: No
00:21:31.650 Max Data Transfer Size: 131072
00:21:31.650 Max Number of Namespaces: 0
00:21:31.650 Max Number of I/O Queues: 1024
00:21:31.650 NVMe Specification Version (VS): 1.3
00:21:31.650 NVMe Specification Version (Identify): 1.3
00:21:31.650 Maximum Queue Entries: 128
00:21:31.650 Contiguous Queues Required: Yes
00:21:31.650 Arbitration Mechanisms Supported
00:21:31.650 Weighted Round Robin: Not Supported
00:21:31.650 Vendor Specific: Not Supported
00:21:31.650 Reset Timeout: 15000 ms
00:21:31.650 Doorbell Stride: 4 bytes
00:21:31.650 NVM Subsystem Reset: Not Supported
00:21:31.650 Command Sets Supported
00:21:31.650 NVM Command Set: Supported
00:21:31.650 Boot Partition: Not Supported
00:21:31.650 Memory Page Size Minimum: 4096 bytes
00:21:31.650 Memory Page Size Maximum: 4096 bytes
00:21:31.650 Persistent Memory Region: Not Supported
00:21:31.650 Optional Asynchronous Events Supported
00:21:31.650 Namespace Attribute Notices: Not Supported
00:21:31.650 Firmware Activation Notices: Not Supported
00:21:31.650 ANA Change Notices: Not Supported
00:21:31.650 PLE Aggregate Log Change Notices: Not Supported
00:21:31.650 LBA Status Info Alert Notices: Not Supported
00:21:31.650 EGE Aggregate Log Change Notices: Not Supported
00:21:31.650 Normal NVM Subsystem Shutdown event: Not Supported
00:21:31.650 Zone Descriptor Change Notices: Not Supported
00:21:31.650 Discovery Log Change Notices: Supported
00:21:31.650 Controller Attributes
00:21:31.650 128-bit Host Identifier: Not Supported
00:21:31.650 Non-Operational Permissive Mode: Not Supported
00:21:31.650 NVM Sets: Not Supported
00:21:31.650 Read Recovery Levels: Not Supported
00:21:31.650 Endurance Groups: Not Supported
00:21:31.650 Predictable Latency Mode: Not Supported
00:21:31.650 Traffic Based Keep ALive: Not Supported
00:21:31.650 Namespace Granularity: Not Supported
00:21:31.650 SQ Associations: Not Supported
00:21:31.650 UUID List: Not Supported
00:21:31.650 Multi-Domain Subsystem: Not Supported
00:21:31.650 Fixed Capacity Management: Not Supported
00:21:31.650 Variable Capacity Management: Not Supported
00:21:31.650 Delete Endurance Group: Not Supported
00:21:31.650 Delete NVM Set: Not Supported
00:21:31.650 Extended LBA Formats Supported: Not Supported
00:21:31.650 Flexible Data Placement Supported: Not Supported
00:21:31.650
00:21:31.650 Controller Memory Buffer Support
00:21:31.650 ================================
00:21:31.650 Supported: No
00:21:31.650
00:21:31.650 Persistent Memory Region Support
00:21:31.650 ================================
00:21:31.650 Supported: No
00:21:31.650
00:21:31.650 Admin Command Set Attributes
00:21:31.650 ============================
00:21:31.650 Security Send/Receive: Not Supported
00:21:31.650 Format NVM: Not Supported
00:21:31.650 Firmware Activate/Download: Not Supported
00:21:31.650 Namespace Management: Not Supported
00:21:31.650 Device Self-Test: Not Supported
00:21:31.650 Directives: Not Supported
00:21:31.650 NVMe-MI: Not Supported
00:21:31.650 Virtualization Management: Not Supported
00:21:31.650 Doorbell Buffer Config: Not Supported
00:21:31.650 Get LBA Status Capability: Not Supported
00:21:31.650 Command & Feature Lockdown Capability: Not Supported
00:21:31.650 Abort Command Limit: 1
00:21:31.650 Async Event Request Limit: 4
00:21:31.650 Number of Firmware Slots: N/A
00:21:31.650 Firmware Slot 1 Read-Only: N/A
00:21:31.650 Firmware Activation Without Reset: N/A
00:21:31.650 Multiple Update Detection Support: N/A
00:21:31.650 Firmware Update Granularity: No Information Provided
00:21:31.650 Per-Namespace SMART Log: No
00:21:31.650 Asymmetric Namespace Access Log Page: Not Supported
00:21:31.650 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:31.650 Command Effects Log Page: Not Supported
00:21:31.650 Get Log Page Extended Data: Supported
00:21:31.650 Telemetry Log Pages: Not Supported
00:21:31.650 Persistent Event Log Pages: Not Supported
00:21:31.650 Supported Log Pages Log Page: May Support
00:21:31.650 Commands Supported & Effects Log Page: Not Supported
00:21:31.650 Feature Identifiers & Effects Log Page:May Support
00:21:31.650 NVMe-MI Commands & Effects Log Page: May Support
00:21:31.650 Data Area 4 for Telemetry Log: Not Supported
00:21:31.650 Error Log Page Entries Supported: 128
00:21:31.650 Keep Alive: Not Supported
00:21:31.650
00:21:31.650 NVM Command Set Attributes
00:21:31.650 ==========================
00:21:31.650 Submission Queue Entry Size
00:21:31.650 Max: 1
00:21:31.650 Min: 1
00:21:31.650 Completion Queue Entry Size
00:21:31.650 Max: 1
00:21:31.650 Min: 1
00:21:31.650 Number of Namespaces: 0
00:21:31.650 Compare Command: Not Supported
00:21:31.650 Write Uncorrectable Command: Not Supported
00:21:31.650 Dataset Management Command: Not Supported
00:21:31.650 Write Zeroes Command: Not Supported
00:21:31.650 Set Features Save Field: Not Supported
00:21:31.650 Reservations: Not Supported
00:21:31.650 Timestamp: Not Supported
00:21:31.650 Copy: Not Supported
00:21:31.650 Volatile Write Cache: Not Present
00:21:31.650 Atomic Write Unit (Normal): 1
00:21:31.650 Atomic Write Unit (PFail): 1
00:21:31.650 Atomic Compare & Write Unit: 1
00:21:31.650 Fused Compare & Write: Supported
00:21:31.650 Scatter-Gather List
00:21:31.650 SGL Command Set: Supported
00:21:31.650 SGL Keyed: Supported
00:21:31.650 SGL Bit Bucket Descriptor: Not Supported
00:21:31.650 SGL Metadata Pointer: Not Supported
00:21:31.650 Oversized SGL: Not Supported
00:21:31.650 SGL Metadata Address: Not Supported
00:21:31.650 SGL Offset: Supported
00:21:31.650 Transport SGL Data Block: Not Supported
00:21:31.650 Replay Protected Memory Block: Not Supported
00:21:31.651
00:21:31.651
00:21:31.651 Firmware Slot Information
00:21:31.651 =========================
00:21:31.651 Active slot: 0
00:21:31.651
00:21:31.651
00:21:31.651 Error Log
00:21:31.651 =========
00:21:31.651
00:21:31.651 Active Namespaces
00:21:31.651 =================
00:21:31.651 Discovery Log Page
00:21:31.651 ==================
00:21:31.651 Generation Counter: 2
00:21:31.651 Number of Records: 2
00:21:31.651 Record Format: 0
00:21:31.651
00:21:31.651 Discovery Log Entry 0
00:21:31.651 ----------------------
00:21:31.651 Transport Type: 3 (TCP)
00:21:31.651 Address Family: 1 (IPv4)
00:21:31.651 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:31.651 Entry Flags:
00:21:31.651 Duplicate Returned Information: 1
00:21:31.651 Explicit Persistent Connection Support for Discovery: 1
00:21:31.651 Transport Requirements:
00:21:31.651 Secure Channel: Not Required
00:21:31.651 Port ID: 0 (0x0000)
00:21:31.651 Controller ID: 65535 (0xffff)
00:21:31.651 Admin Max SQ Size: 128
00:21:31.651 Transport Service Identifier: 4420
00:21:31.651 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:31.651 Transport Address: 10.0.0.2
00:21:31.651 Discovery Log Entry 1
00:21:31.651 ----------------------
00:21:31.651 Transport Type: 3 (TCP)
00:21:31.651 Address Family: 1 (IPv4)
00:21:31.651 Subsystem Type: 2 (NVM Subsystem)
00:21:31.651 Entry Flags:
00:21:31.651 Duplicate Returned Information: 0
00:21:31.651 Explicit Persistent Connection Support for Discovery: 0
00:21:31.651 Transport Requirements:
00:21:31.651 Secure Channel: Not Required
00:21:31.651 Port ID: 0 (0x0000)
00:21:31.651 Controller ID: 65535 (0xffff)
00:21:31.651 Admin Max SQ Size: 128
00:21:31.651 Transport Service Identifier: 4420
00:21:31.651 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:31.651 Transport Address: 10.0.0.2
00:21:31.651 [2024-12-09 04:12:00.024479] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:31.651 [2024-12-09
04:12:00.024502] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293100) on tqpair=0x2231690
00:21:31.651 [2024-12-09 04:12:00.024515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651 [2024-12-09 04:12:00.024534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293280) on tqpair=0x2231690
00:21:31.651 [2024-12-09 04:12:00.024542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651 [2024-12-09 04:12:00.024551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293400) on tqpair=0x2231690
00:21:31.651 [2024-12-09 04:12:00.024558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651 [2024-12-09 04:12:00.024567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651 [2024-12-09 04:12:00.024574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:31.651 [2024-12-09 04:12:00.024594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.024605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.024611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651 [2024-12-09 04:12:00.024623] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651 [2024-12-09 04:12:00.024650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651 [2024-12-09 04:12:00.024747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651 [2024-12-09 04:12:00.024762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651 [2024-12-09 04:12:00.024770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.024777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651 [2024-12-09 04:12:00.024790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.024798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.024804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690)
00:21:31.651 [2024-12-09 04:12:00.024815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.651 [2024-12-09 04:12:00.024843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0
00:21:31.651 [2024-12-09 04:12:00.024967] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.651 [2024-12-09 04:12:00.024980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.651 [2024-12-09 04:12:00.024987] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.024994] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690
00:21:31.651 [2024-12-09 04:12:00.025002] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:21:31.651 [2024-12-09 04:12:00.025010] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:21:31.651 [2024-12-09 04:12:00.025026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.651 [2024-12-09 04:12:00.025036] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.651
[2024-12-09 04:12:00.025043] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.651 [2024-12-09 04:12:00.025053] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.651 [2024-12-09 04:12:00.025079] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.651 [2024-12-09 04:12:00.025166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.651 [2024-12-09 04:12:00.025181] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.651 [2024-12-09 04:12:00.025188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.651 [2024-12-09 04:12:00.025212] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.651 [2024-12-09 04:12:00.025239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.651 [2024-12-09 04:12:00.025260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.651 [2024-12-09 04:12:00.025360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.651 [2024-12-09 04:12:00.025374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.651 [2024-12-09 04:12:00.025381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on 
tqpair=0x2231690 00:21:31.651 [2024-12-09 04:12:00.025404] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025414] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025421] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.651 [2024-12-09 04:12:00.025431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.651 [2024-12-09 04:12:00.025453] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.651 [2024-12-09 04:12:00.025549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.651 [2024-12-09 04:12:00.025564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.651 [2024-12-09 04:12:00.025571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.651 [2024-12-09 04:12:00.025594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025611] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.651 [2024-12-09 04:12:00.025621] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.651 [2024-12-09 04:12:00.025642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.651 [2024-12-09 04:12:00.025723] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.651 [2024-12-09 04:12:00.025736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:31.651 [2024-12-09 04:12:00.025743] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025750] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.651 [2024-12-09 04:12:00.025767] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.651 [2024-12-09 04:12:00.025794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.651 [2024-12-09 04:12:00.025816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.651 [2024-12-09 04:12:00.025910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.651 [2024-12-09 04:12:00.025924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.651 [2024-12-09 04:12:00.025931] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.651 [2024-12-09 04:12:00.025955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.651 [2024-12-09 04:12:00.025965] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.025971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.025981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.026003] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.026099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.026113] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.026120] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026127] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.026144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.026170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.026192] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.026299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.026314] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.026321] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.026344] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.026371] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.026392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.026517] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.026531] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.026538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026544] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.026560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026576] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.026587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.026619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.026698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.026713] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.026720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.026743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026753] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026760] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.026770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.026791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.026920] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.026934] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.026941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.026964] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026974] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.026980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.026991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.027013] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.027122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.027136] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.027143] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027150] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.027166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027182] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.027193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.027214] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.027326] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.027342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.027349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.027373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.027399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.027421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.027505] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 
04:12:00.027519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.027535] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.027559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.027585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.027606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.027731] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.027744] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.027751] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.027774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027790] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.027800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 
04:12:00.027821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.027950] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.027964] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.027971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.027978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.027994] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.028003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.028009] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.028020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.028041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.028119] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.028133] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.028140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.028147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.028163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.028173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.028179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.652 [2024-12-09 04:12:00.028189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.652 [2024-12-09 04:12:00.028211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.652 [2024-12-09 04:12:00.032283] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.652 [2024-12-09 04:12:00.032300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.652 [2024-12-09 04:12:00.032308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.032322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.652 [2024-12-09 04:12:00.032342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.652 [2024-12-09 04:12:00.032352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.653 [2024-12-09 04:12:00.032358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2231690) 00:21:31.653 [2024-12-09 04:12:00.032369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.653 [2024-12-09 04:12:00.032392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2293580, cid 3, qid 0 00:21:31.653 [2024-12-09 04:12:00.032527] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.653 [2024-12-09 04:12:00.032541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.653 [2024-12-09 04:12:00.032548] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.653 [2024-12-09 04:12:00.032555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2293580) on tqpair=0x2231690 00:21:31.653 [2024-12-09 04:12:00.032568] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:21:31.653
00:21:31.653 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:31.653 [2024-12-09 04:12:00.067791] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization...
00:21:31.653 [2024-12-09 04:12:00.067832] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292197 ]
00:21:31.653 [2024-12-09 04:12:00.117887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:21:31.653 [2024-12-09 04:12:00.117944] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:31.653 [2024-12-09 04:12:00.117955] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:31.653 [2024-12-09 04:12:00.117978] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:31.653 [2024-12-09 04:12:00.117992] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:31.653 [2024-12-09 04:12:00.121687] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:21:31.653 [2024-12-09 04:12:00.121749] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1287690 0
00:21:31.653 [2024-12-09 04:12:00.121881] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:31.653 [2024-12-09 04:12:00.121897]
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:31.653 [2024-12-09 04:12:00.121909] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:31.653 [2024-12-09 04:12:00.121916] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:31.653 [2024-12-09 04:12:00.121952] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.121964] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.121970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.121985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:31.653 [2024-12-09 04:12:00.122011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.129288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.129307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653 [2024-12-09 04:12:00.129315] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129322] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653 [2024-12-09 04:12:00.129336] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:31.653 [2024-12-09 04:12:00.129363] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:21:31.653 [2024-12-09 04:12:00.129373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:21:31.653 [2024-12-09 04:12:00.129394] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.129422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653 [2024-12-09 04:12:00.129447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.129548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.129562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653 [2024-12-09 04:12:00.129570] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653 [2024-12-09 04:12:00.129590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:21:31.653 [2024-12-09 04:12:00.129604] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:21:31.653 [2024-12-09 04:12:00.129617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129632] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.129643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653 [2024-12-09 04:12:00.129665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.129793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.129805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653 [2024-12-09 04:12:00.129812] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653 [2024-12-09 04:12:00.129829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:21:31.653 [2024-12-09 04:12:00.129843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:31.653 [2024-12-09 04:12:00.129855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129863] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.129869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.129880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653 [2024-12-09 04:12:00.129902] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.129991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.130008] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653 [2024-12-09 04:12:00.130016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653 [2024-12-09 04:12:00.130031] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:31.653 [2024-12-09 04:12:00.130064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130074] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130080] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.130091] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653 [2024-12-09 04:12:00.130112] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.130238] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.130250] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653 [2024-12-09 04:12:00.130258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653 [2024-12-09 04:12:00.130282] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:21:31.653 [2024-12-09 04:12:00.130292] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:21:31.653 [2024-12-09 04:12:00.130306] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:31.653 [2024-12-09 04:12:00.130428] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:21:31.653 [2024-12-09 04:12:00.130436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:31.653 [2024-12-09 04:12:00.130450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130458] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130464] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.130475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653 [2024-12-09 04:12:00.130497] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.130578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.130592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.653 [2024-12-09 04:12:00.130600] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130607] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.653 [2024-12-09 04:12:00.130615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:31.653 [2024-12-09 04:12:00.130632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.653 [2024-12-09 04:12:00.130648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.653 [2024-12-09 04:12:00.130658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.653 [2024-12-09 04:12:00.130680] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.653 [2024-12-09 04:12:00.130776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.653 [2024-12-09 04:12:00.130790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654 [2024-12-09 04:12:00.130797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.130804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.654 [2024-12-09 04:12:00.130812] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:31.654 [2024-12-09 04:12:00.130820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.130834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:21:31.654 [2024-12-09 04:12:00.130850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.130866] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.130874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.130885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.654 [2024-12-09 04:12:00.130907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.654 [2024-12-09 04:12:00.131063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.654 [2024-12-09 04:12:00.131077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.654 [2024-12-09 04:12:00.131084] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131090] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=0
00:21:31.654 [2024-12-09 04:12:00.131112] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9100) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.654 [2024-12-09 04:12:00.131121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131132] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131140] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654 [2024-12-09 04:12:00.131174] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654 [2024-12-09 04:12:00.131181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.654 [2024-12-09 04:12:00.131205] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:21:31.654 [2024-12-09 04:12:00.131215] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:21:31.654 [2024-12-09 04:12:00.131223] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:21:31.654 [2024-12-09 04:12:00.131230] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:21:31.654 [2024-12-09 04:12:00.131239] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:21:31.654 [2024-12-09 04:12:00.131247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:31.654 [2024-12-09 04:12:00.131337] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.654 [2024-12-09 04:12:00.131466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654 [2024-12-09 04:12:00.131480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654 [2024-12-09 04:12:00.131487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131494] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690
00:21:31.654 [2024-12-09 04:12:00.131505] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131513] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.654 [2024-12-09 04:12:00.131539] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131547] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.654 [2024-12-09 04:12:00.131572] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131578] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.654 [2024-12-09 04:12:00.131603] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.654 [2024-12-09 04:12:00.131633] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131666] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131685] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.654 [2024-12-09 04:12:00.131708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9100, cid 0, qid 0
00:21:31.654 [2024-12-09 04:12:00.131719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9280, cid 1, qid 0
00:21:31.654 [2024-12-09 04:12:00.131727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9400, cid 2, qid 0
00:21:31.654 [2024-12-09 04:12:00.131735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0
00:21:31.654 [2024-12-09 04:12:00.131743] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.654 [2024-12-09 04:12:00.131871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654 [2024-12-09 04:12:00.131886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654 [2024-12-09 04:12:00.131893] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.654 [2024-12-09 04:12:00.131909] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:21:31.654 [2024-12-09 04:12:00.131918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:31.654 [2024-12-09 04:12:00.131954] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.131968] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.654 [2024-12-09 04:12:00.131978] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:31.654 [2024-12-09 04:12:00.132000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.654 [2024-12-09 04:12:00.132081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.654 [2024-12-09 04:12:00.132095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.654 [2024-12-09 04:12:00.132102] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.654 [2024-12-09 04:12:00.132109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.132177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.132197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.132212] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.132231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.132252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655 [2024-12-09 04:12:00.132388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.655 [2024-12-09 04:12:00.132402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.655 [2024-12-09 04:12:00.132409] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132416] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4
00:21:31.655 [2024-12-09 04:12:00.132423] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.655 [2024-12-09 04:12:00.132431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132441] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132448] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.132470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.132476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.132502] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added
00:21:31.655 [2024-12-09 04:12:00.132525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.132544] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.132557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.132576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.132598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655 [2024-12-09 04:12:00.132734] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.655 [2024-12-09 04:12:00.132749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.655 [2024-12-09 04:12:00.132756] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132763] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4
00:21:31.655 [2024-12-09 04:12:00.132770] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.655 [2024-12-09 04:12:00.132778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132788] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132795] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.132817] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.132824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.132851] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.132869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.132884] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.132892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.132903] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.132925] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655 [2024-12-09 04:12:00.133060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.655 [2024-12-09 04:12:00.133074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.655 [2024-12-09 04:12:00.133081] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.133088] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=4
00:21:31.655 [2024-12-09 04:12:00.133096] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.655 [2024-12-09 04:12:00.133103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.133114] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.133125] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.133153] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.133163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.133170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.133177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.133189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.133220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.133235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.133249] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.133259] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.133268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.137294] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID
00:21:31.655 [2024-12-09 04:12:00.137305] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms)
00:21:31.655 [2024-12-09 04:12:00.137314] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout)
00:21:31.655 [2024-12-09 04:12:00.137333] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.137352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.137363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137370] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.137385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:21:31.655 [2024-12-09 04:12:00.137412] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.655 [2024-12-09 04:12:00.137440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655 [2024-12-09 04:12:00.137567] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.137582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.137589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.137606] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.137615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.137622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.137644] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.137667] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.137690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655 [2024-12-09 04:12:00.137817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.137831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.137838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.137860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.137869] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.137880] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.137901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655 [2024-12-09 04:12:00.138044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.655 [2024-12-09 04:12:00.138058] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.655 [2024-12-09 04:12:00.138065] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.138072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.655 [2024-12-09 04:12:00.138088] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.655 [2024-12-09 04:12:00.138098] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.655 [2024-12-09 04:12:00.138108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.655 [2024-12-09 04:12:00.138130] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.655 [2024-12-09 04:12:00.138211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:31.656 [2024-12-09 04:12:00.138224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:31.656 [2024-12-09 04:12:00.138231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690
00:21:31.656 [2024-12-09 04:12:00.138266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1287690)
00:21:31.656 [2024-12-09 04:12:00.138297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656 [2024-12-09 04:12:00.138310] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138318] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1287690)
00:21:31.656 [2024-12-09 04:12:00.138328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656 [2024-12-09 04:12:00.138340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1287690)
00:21:31.656 [2024-12-09 04:12:00.138357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656 [2024-12-09 04:12:00.138369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1287690)
00:21:31.656 [2024-12-09 04:12:00.138391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:31.656 [2024-12-09 04:12:00.138415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9880, cid 5, qid 0
00:21:31.656 [2024-12-09 04:12:00.138426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9700, cid 4, qid 0
00:21:31.656 [2024-12-09 04:12:00.138434] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9a00, cid 6, qid 0
00:21:31.656 [2024-12-09 04:12:00.138442] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9b80, cid 7, qid 0
00:21:31.656 [2024-12-09 04:12:00.138622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656 [2024-12-09 04:12:00.138637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656 [2024-12-09 04:12:00.138644] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138651] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=8192, cccid=5
00:21:31.656 [2024-12-09 04:12:00.138659] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9880) on tqpair(0x1287690): expected_datao=0, payload_size=8192
00:21:31.656 [2024-12-09 04:12:00.138667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138677] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138685] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138694] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656 [2024-12-09 04:12:00.138703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656 [2024-12-09 04:12:00.138709] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138716] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=512, cccid=4
00:21:31.656 [2024-12-09 04:12:00.138723] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9700) on tqpair(0x1287690): expected_datao=0, payload_size=512
00:21:31.656 [2024-12-09 04:12:00.138731] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138740] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138747] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656 [2024-12-09 04:12:00.138764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656 [2024-12-09 04:12:00.138771] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138777] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=512, cccid=6
00:21:31.656 [2024-12-09 04:12:00.138784] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9a00) on tqpair(0x1287690): expected_datao=0, payload_size=512
00:21:31.656 [2024-12-09 04:12:00.138792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138801] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:31.656 [2024-12-09 04:12:00.138825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:31.656 [2024-12-09 04:12:00.138832] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138838] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1287690): datao=0, datal=4096, cccid=7
00:21:31.656 [2024-12-09 04:12:00.138846] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12e9b80) on tqpair(0x1287690): expected_datao=0, payload_size=4096
00:21:31.656 [2024-12-09 04:12:00.138853] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138863] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138870] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:31.656 [2024-12-09 04:12:00.138887]
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.656 [2024-12-09 04:12:00.138897] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.656 [2024-12-09 04:12:00.138904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.656 [2024-12-09 04:12:00.138911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9880) on tqpair=0x1287690 00:21:31.656 [2024-12-09 04:12:00.138930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.656 [2024-12-09 04:12:00.138941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.656 [2024-12-09 04:12:00.138948] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.656 [2024-12-09 04:12:00.138955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9700) on tqpair=0x1287690 00:21:31.656 [2024-12-09 04:12:00.138971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.656 [2024-12-09 04:12:00.138982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.656 [2024-12-09 04:12:00.138989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.656 [2024-12-09 04:12:00.138996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9a00) on tqpair=0x1287690 00:21:31.656 [2024-12-09 04:12:00.139007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.656 [2024-12-09 04:12:00.139031] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.656 [2024-12-09 04:12:00.139038] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.656 [2024-12-09 04:12:00.139045] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9b80) on tqpair=0x1287690 00:21:31.656 ===================================================== 00:21:31.656 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:31.656 
===================================================== 00:21:31.656 Controller Capabilities/Features 00:21:31.656 ================================ 00:21:31.656 Vendor ID: 8086 00:21:31.656 Subsystem Vendor ID: 8086 00:21:31.656 Serial Number: SPDK00000000000001 00:21:31.656 Model Number: SPDK bdev Controller 00:21:31.656 Firmware Version: 25.01 00:21:31.656 Recommended Arb Burst: 6 00:21:31.656 IEEE OUI Identifier: e4 d2 5c 00:21:31.656 Multi-path I/O 00:21:31.656 May have multiple subsystem ports: Yes 00:21:31.656 May have multiple controllers: Yes 00:21:31.656 Associated with SR-IOV VF: No 00:21:31.656 Max Data Transfer Size: 131072 00:21:31.656 Max Number of Namespaces: 32 00:21:31.656 Max Number of I/O Queues: 127 00:21:31.656 NVMe Specification Version (VS): 1.3 00:21:31.656 NVMe Specification Version (Identify): 1.3 00:21:31.656 Maximum Queue Entries: 128 00:21:31.656 Contiguous Queues Required: Yes 00:21:31.656 Arbitration Mechanisms Supported 00:21:31.656 Weighted Round Robin: Not Supported 00:21:31.656 Vendor Specific: Not Supported 00:21:31.656 Reset Timeout: 15000 ms 00:21:31.656 Doorbell Stride: 4 bytes 00:21:31.656 NVM Subsystem Reset: Not Supported 00:21:31.656 Command Sets Supported 00:21:31.656 NVM Command Set: Supported 00:21:31.656 Boot Partition: Not Supported 00:21:31.656 Memory Page Size Minimum: 4096 bytes 00:21:31.656 Memory Page Size Maximum: 4096 bytes 00:21:31.656 Persistent Memory Region: Not Supported 00:21:31.656 Optional Asynchronous Events Supported 00:21:31.656 Namespace Attribute Notices: Supported 00:21:31.656 Firmware Activation Notices: Not Supported 00:21:31.656 ANA Change Notices: Not Supported 00:21:31.656 PLE Aggregate Log Change Notices: Not Supported 00:21:31.656 LBA Status Info Alert Notices: Not Supported 00:21:31.656 EGE Aggregate Log Change Notices: Not Supported 00:21:31.656 Normal NVM Subsystem Shutdown event: Not Supported 00:21:31.656 Zone Descriptor Change Notices: Not Supported 00:21:31.656 Discovery Log Change 
Notices: Not Supported 00:21:31.656 Controller Attributes 00:21:31.656 128-bit Host Identifier: Supported 00:21:31.656 Non-Operational Permissive Mode: Not Supported 00:21:31.656 NVM Sets: Not Supported 00:21:31.656 Read Recovery Levels: Not Supported 00:21:31.656 Endurance Groups: Not Supported 00:21:31.656 Predictable Latency Mode: Not Supported 00:21:31.656 Traffic Based Keep ALive: Not Supported 00:21:31.656 Namespace Granularity: Not Supported 00:21:31.656 SQ Associations: Not Supported 00:21:31.656 UUID List: Not Supported 00:21:31.656 Multi-Domain Subsystem: Not Supported 00:21:31.656 Fixed Capacity Management: Not Supported 00:21:31.656 Variable Capacity Management: Not Supported 00:21:31.656 Delete Endurance Group: Not Supported 00:21:31.656 Delete NVM Set: Not Supported 00:21:31.656 Extended LBA Formats Supported: Not Supported 00:21:31.656 Flexible Data Placement Supported: Not Supported 00:21:31.656 00:21:31.656 Controller Memory Buffer Support 00:21:31.656 ================================ 00:21:31.656 Supported: No 00:21:31.656 00:21:31.656 Persistent Memory Region Support 00:21:31.657 ================================ 00:21:31.657 Supported: No 00:21:31.657 00:21:31.657 Admin Command Set Attributes 00:21:31.657 ============================ 00:21:31.657 Security Send/Receive: Not Supported 00:21:31.657 Format NVM: Not Supported 00:21:31.657 Firmware Activate/Download: Not Supported 00:21:31.657 Namespace Management: Not Supported 00:21:31.657 Device Self-Test: Not Supported 00:21:31.657 Directives: Not Supported 00:21:31.657 NVMe-MI: Not Supported 00:21:31.657 Virtualization Management: Not Supported 00:21:31.657 Doorbell Buffer Config: Not Supported 00:21:31.657 Get LBA Status Capability: Not Supported 00:21:31.657 Command & Feature Lockdown Capability: Not Supported 00:21:31.657 Abort Command Limit: 4 00:21:31.657 Async Event Request Limit: 4 00:21:31.657 Number of Firmware Slots: N/A 00:21:31.657 Firmware Slot 1 Read-Only: N/A 00:21:31.657 Firmware 
Activation Without Reset: N/A 00:21:31.657 Multiple Update Detection Support: N/A 00:21:31.657 Firmware Update Granularity: No Information Provided 00:21:31.657 Per-Namespace SMART Log: No 00:21:31.657 Asymmetric Namespace Access Log Page: Not Supported 00:21:31.657 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:31.657 Command Effects Log Page: Supported 00:21:31.657 Get Log Page Extended Data: Supported 00:21:31.657 Telemetry Log Pages: Not Supported 00:21:31.657 Persistent Event Log Pages: Not Supported 00:21:31.657 Supported Log Pages Log Page: May Support 00:21:31.657 Commands Supported & Effects Log Page: Not Supported 00:21:31.657 Feature Identifiers & Effects Log Page:May Support 00:21:31.657 NVMe-MI Commands & Effects Log Page: May Support 00:21:31.657 Data Area 4 for Telemetry Log: Not Supported 00:21:31.657 Error Log Page Entries Supported: 128 00:21:31.657 Keep Alive: Supported 00:21:31.657 Keep Alive Granularity: 10000 ms 00:21:31.657 00:21:31.657 NVM Command Set Attributes 00:21:31.657 ========================== 00:21:31.657 Submission Queue Entry Size 00:21:31.657 Max: 64 00:21:31.657 Min: 64 00:21:31.657 Completion Queue Entry Size 00:21:31.657 Max: 16 00:21:31.657 Min: 16 00:21:31.657 Number of Namespaces: 32 00:21:31.657 Compare Command: Supported 00:21:31.657 Write Uncorrectable Command: Not Supported 00:21:31.657 Dataset Management Command: Supported 00:21:31.657 Write Zeroes Command: Supported 00:21:31.657 Set Features Save Field: Not Supported 00:21:31.657 Reservations: Supported 00:21:31.657 Timestamp: Not Supported 00:21:31.657 Copy: Supported 00:21:31.657 Volatile Write Cache: Present 00:21:31.657 Atomic Write Unit (Normal): 1 00:21:31.657 Atomic Write Unit (PFail): 1 00:21:31.657 Atomic Compare & Write Unit: 1 00:21:31.657 Fused Compare & Write: Supported 00:21:31.657 Scatter-Gather List 00:21:31.657 SGL Command Set: Supported 00:21:31.657 SGL Keyed: Supported 00:21:31.657 SGL Bit Bucket Descriptor: Not Supported 00:21:31.657 SGL Metadata 
Pointer: Not Supported 00:21:31.657 Oversized SGL: Not Supported 00:21:31.657 SGL Metadata Address: Not Supported 00:21:31.657 SGL Offset: Supported 00:21:31.657 Transport SGL Data Block: Not Supported 00:21:31.657 Replay Protected Memory Block: Not Supported 00:21:31.657 00:21:31.657 Firmware Slot Information 00:21:31.657 ========================= 00:21:31.657 Active slot: 1 00:21:31.657 Slot 1 Firmware Revision: 25.01 00:21:31.657 00:21:31.657 00:21:31.657 Commands Supported and Effects 00:21:31.657 ============================== 00:21:31.657 Admin Commands 00:21:31.657 -------------- 00:21:31.657 Get Log Page (02h): Supported 00:21:31.657 Identify (06h): Supported 00:21:31.657 Abort (08h): Supported 00:21:31.657 Set Features (09h): Supported 00:21:31.657 Get Features (0Ah): Supported 00:21:31.657 Asynchronous Event Request (0Ch): Supported 00:21:31.657 Keep Alive (18h): Supported 00:21:31.657 I/O Commands 00:21:31.657 ------------ 00:21:31.657 Flush (00h): Supported LBA-Change 00:21:31.657 Write (01h): Supported LBA-Change 00:21:31.657 Read (02h): Supported 00:21:31.657 Compare (05h): Supported 00:21:31.657 Write Zeroes (08h): Supported LBA-Change 00:21:31.657 Dataset Management (09h): Supported LBA-Change 00:21:31.657 Copy (19h): Supported LBA-Change 00:21:31.657 00:21:31.657 Error Log 00:21:31.657 ========= 00:21:31.657 00:21:31.657 Arbitration 00:21:31.657 =========== 00:21:31.657 Arbitration Burst: 1 00:21:31.657 00:21:31.657 Power Management 00:21:31.657 ================ 00:21:31.657 Number of Power States: 1 00:21:31.657 Current Power State: Power State #0 00:21:31.657 Power State #0: 00:21:31.657 Max Power: 0.00 W 00:21:31.657 Non-Operational State: Operational 00:21:31.657 Entry Latency: Not Reported 00:21:31.657 Exit Latency: Not Reported 00:21:31.657 Relative Read Throughput: 0 00:21:31.657 Relative Read Latency: 0 00:21:31.657 Relative Write Throughput: 0 00:21:31.657 Relative Write Latency: 0 00:21:31.657 Idle Power: Not Reported 00:21:31.657 Active 
Power: Not Reported 00:21:31.657 Non-Operational Permissive Mode: Not Supported 00:21:31.657 00:21:31.657 Health Information 00:21:31.657 ================== 00:21:31.657 Critical Warnings: 00:21:31.657 Available Spare Space: OK 00:21:31.657 Temperature: OK 00:21:31.657 Device Reliability: OK 00:21:31.657 Read Only: No 00:21:31.657 Volatile Memory Backup: OK 00:21:31.657 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:31.657 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:31.657 Available Spare: 0% 00:21:31.657 Available Spare Threshold: 0% 00:21:31.657 Life Percentage Used:[2024-12-09 04:12:00.139158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139170] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1287690) 00:21:31.657 [2024-12-09 04:12:00.139180] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.657 [2024-12-09 04:12:00.139202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9b80, cid 7, qid 0 00:21:31.657 [2024-12-09 04:12:00.139333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.657 [2024-12-09 04:12:00.139348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.657 [2024-12-09 04:12:00.139356] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9b80) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139411] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:31.657 [2024-12-09 04:12:00.139431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9100) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.657 [2024-12-09 04:12:00.139452] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9280) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.657 [2024-12-09 04:12:00.139468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9400) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.657 [2024-12-09 04:12:00.139484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.657 [2024-12-09 04:12:00.139504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.657 [2024-12-09 04:12:00.139533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.657 [2024-12-09 04:12:00.139557] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.657 [2024-12-09 04:12:00.139636] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.657 [2024-12-09 04:12:00.139650] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.657 [2024-12-09 04:12:00.139657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.657 [2024-12-09 
04:12:00.139664] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139684] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.657 [2024-12-09 04:12:00.139701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.657 [2024-12-09 04:12:00.139727] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.657 [2024-12-09 04:12:00.139836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.657 [2024-12-09 04:12:00.139850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.657 [2024-12-09 04:12:00.139857] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.657 [2024-12-09 04:12:00.139864] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.657 [2024-12-09 04:12:00.139872] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:31.658 [2024-12-09 04:12:00.139880] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:31.658 [2024-12-09 04:12:00.139896] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.139905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.139912] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.139923] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.139944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.140038] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.140050] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.140057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140064] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.140080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140096] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.140106] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.140127] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.140201] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.140214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.140222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.140245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 
[2024-12-09 04:12:00.140265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.140284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.140307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.140386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.140400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.140407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.140430] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.140457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.140478] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.140584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.140596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.140603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on 
tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.140626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.140652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.140673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.140747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.140759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.140766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.140788] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.140814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.140835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.140909] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.140923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:31.658 [2024-12-09 04:12:00.140930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140937] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.140953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.140974] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.140985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.141006] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.141079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.141092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.141099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.141105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.141121] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.141130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.141137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.141147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.141168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.141240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.141252] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.141259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.141266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.145295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.145308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.145315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1287690) 00:21:31.658 [2024-12-09 04:12:00.145341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:31.658 [2024-12-09 04:12:00.145364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12e9580, cid 3, qid 0 00:21:31.658 [2024-12-09 04:12:00.145492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:31.658 [2024-12-09 04:12:00.145504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:31.658 [2024-12-09 04:12:00.145512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:31.658 [2024-12-09 04:12:00.145519] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12e9580) on tqpair=0x1287690 00:21:31.658 [2024-12-09 04:12:00.145532] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds 00:21:31.658 0% 00:21:31.658 Data Units Read: 0 00:21:31.658 Data Units Written: 0 00:21:31.658 Host Read Commands: 0 00:21:31.658 Host Write Commands: 0 00:21:31.658 Controller Busy Time: 0 minutes 00:21:31.658 Power Cycles: 0 
00:21:31.658 Power On Hours: 0 hours 00:21:31.658 Unsafe Shutdowns: 0 00:21:31.658 Unrecoverable Media Errors: 0 00:21:31.658 Lifetime Error Log Entries: 0 00:21:31.658 Warning Temperature Time: 0 minutes 00:21:31.658 Critical Temperature Time: 0 minutes 00:21:31.658 00:21:31.658 Number of Queues 00:21:31.658 ================ 00:21:31.658 Number of I/O Submission Queues: 127 00:21:31.658 Number of I/O Completion Queues: 127 00:21:31.658 00:21:31.658 Active Namespaces 00:21:31.658 ================= 00:21:31.658 Namespace ID:1 00:21:31.658 Error Recovery Timeout: Unlimited 00:21:31.658 Command Set Identifier: NVM (00h) 00:21:31.658 Deallocate: Supported 00:21:31.658 Deallocated/Unwritten Error: Not Supported 00:21:31.658 Deallocated Read Value: Unknown 00:21:31.658 Deallocate in Write Zeroes: Not Supported 00:21:31.658 Deallocated Guard Field: 0xFFFF 00:21:31.658 Flush: Supported 00:21:31.658 Reservation: Supported 00:21:31.658 Namespace Sharing Capabilities: Multiple Controllers 00:21:31.658 Size (in LBAs): 131072 (0GiB) 00:21:31.658 Capacity (in LBAs): 131072 (0GiB) 00:21:31.658 Utilization (in LBAs): 131072 (0GiB) 00:21:31.658 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:31.658 EUI64: ABCDEF0123456789 00:21:31.658 UUID: c1f55b3a-c777-461f-8bb4-17aef1175c5a 00:21:31.658 Thin Provisioning: Not Supported 00:21:31.658 Per-NS Atomic Units: Yes 00:21:31.658 Atomic Boundary Size (Normal): 0 00:21:31.658 Atomic Boundary Size (PFail): 0 00:21:31.658 Atomic Boundary Offset: 0 00:21:31.658 Maximum Single Source Range Length: 65535 00:21:31.658 Maximum Copy Length: 65535 00:21:31.658 Maximum Source Range Count: 1 00:21:31.658 NGUID/EUI64 Never Reused: No 00:21:31.659 Namespace Write Protected: No 00:21:31.659 Number of LBA Formats: 1 00:21:31.659 Current LBA Format: LBA Format #00 00:21:31.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:31.659 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:31.659 04:12:00 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:31.659 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:31.659 rmmod nvme_tcp 00:21:31.659 rmmod nvme_fabrics 00:21:31.659 rmmod nvme_keyring 00:21:31.916 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 292082 ']' 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 292082 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 292082 ']' 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # 
kill -0 292082 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 292082 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 292082' 00:21:31.917 killing process with pid 292082 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 292082 00:21:31.917 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 292082 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:32.174 04:12:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:34.098 00:21:34.098 real 0m5.677s 00:21:34.098 user 0m4.715s 00:21:34.098 sys 0m2.051s 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:34.098 ************************************ 00:21:34.098 END TEST nvmf_identify 00:21:34.098 ************************************ 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:34.098 ************************************ 00:21:34.098 START TEST nvmf_perf 00:21:34.098 ************************************ 00:21:34.098 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:34.383 * Looking for test storage... 
00:21:34.383 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:34.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.383 --rc genhtml_branch_coverage=1 00:21:34.383 --rc genhtml_function_coverage=1 00:21:34.383 --rc genhtml_legend=1 00:21:34.383 --rc geninfo_all_blocks=1 00:21:34.383 --rc geninfo_unexecuted_blocks=1 00:21:34.383 00:21:34.383 ' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:34.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:34.383 --rc genhtml_branch_coverage=1 00:21:34.383 --rc genhtml_function_coverage=1 00:21:34.383 --rc genhtml_legend=1 00:21:34.383 --rc geninfo_all_blocks=1 00:21:34.383 --rc geninfo_unexecuted_blocks=1 00:21:34.383 00:21:34.383 ' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:34.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.383 --rc genhtml_branch_coverage=1 00:21:34.383 --rc genhtml_function_coverage=1 00:21:34.383 --rc genhtml_legend=1 00:21:34.383 --rc geninfo_all_blocks=1 00:21:34.383 --rc geninfo_unexecuted_blocks=1 00:21:34.383 00:21:34.383 ' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:34.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.383 --rc genhtml_branch_coverage=1 00:21:34.383 --rc genhtml_function_coverage=1 00:21:34.383 --rc genhtml_legend=1 00:21:34.383 --rc geninfo_all_blocks=1 00:21:34.383 --rc geninfo_unexecuted_blocks=1 00:21:34.383 00:21:34.383 ' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.383 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:34.384 04:12:02 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.384 04:12:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:36.639 04:12:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.639 
04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:36.639 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:36.639 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:36.639 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:36.639 04:12:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:36.639 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:36.639 04:12:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:36.639 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:36.639 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:36.639 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:36.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:21:36.640 00:21:36.640 --- 10.0.0.2 ping statistics --- 00:21:36.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.640 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:36.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:36.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:21:36.640 00:21:36.640 --- 10.0.0.1 ping statistics --- 00:21:36.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.640 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=294298 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 294298 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 294298 ']' 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.640 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:36.896 [2024-12-09 04:12:05.262597] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:21:36.896 [2024-12-09 04:12:05.262666] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.896 [2024-12-09 04:12:05.334949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:36.896 [2024-12-09 04:12:05.398066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.896 [2024-12-09 04:12:05.398147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.896 [2024-12-09 04:12:05.398162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.896 [2024-12-09 04:12:05.398173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:36.896 [2024-12-09 04:12:05.398183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:36.896 [2024-12-09 04:12:05.399887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.896 [2024-12-09 04:12:05.399917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.896 [2024-12-09 04:12:05.399945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:36.896 [2024-12-09 04:12:05.399948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:37.154 04:12:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:40.430 04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:40.430 04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:40.430 04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:21:40.430 04:12:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:40.995 04:12:09 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:40.995 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:21:40.995 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:40.995 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:40.995 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:40.995 [2024-12-09 04:12:09.538652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.995 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:41.253 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:41.253 04:12:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:41.819 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:41.819 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:41.819 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:42.076 [2024-12-09 04:12:10.626722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.077 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:21:42.335 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:21:42.335 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:42.335 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:42.335 04:12:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:21:43.707 Initializing NVMe Controllers 00:21:43.707 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:21:43.707 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:21:43.707 Initialization complete. Launching workers. 00:21:43.707 ======================================================== 00:21:43.707 Latency(us) 00:21:43.707 Device Information : IOPS MiB/s Average min max 00:21:43.707 PCIE (0000:88:00.0) NSID 1 from core 0: 85336.30 333.34 374.41 38.59 5291.06 00:21:43.707 ======================================================== 00:21:43.707 Total : 85336.30 333.34 374.41 38.59 5291.06 00:21:43.707 00:21:43.707 04:12:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:45.080 Initializing NVMe Controllers 00:21:45.080 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.080 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:45.080 Initialization complete. Launching workers. 
00:21:45.080 ======================================================== 00:21:45.080 Latency(us) 00:21:45.080 Device Information : IOPS MiB/s Average min max 00:21:45.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 102.00 0.40 10161.17 149.18 46025.81 00:21:45.080 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 40.00 0.16 25910.70 7948.17 47908.22 00:21:45.080 ======================================================== 00:21:45.080 Total : 142.00 0.55 14597.66 149.18 47908.22 00:21:45.080 00:21:45.080 04:12:13 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:46.453 Initializing NVMe Controllers 00:21:46.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:46.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:46.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:46.453 Initialization complete. Launching workers. 
00:21:46.453 ======================================================== 00:21:46.453 Latency(us) 00:21:46.453 Device Information : IOPS MiB/s Average min max 00:21:46.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7944.00 31.03 4030.29 662.58 10744.78 00:21:46.453 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3711.00 14.50 8668.30 5154.15 18983.63 00:21:46.453 ======================================================== 00:21:46.453 Total : 11655.00 45.53 5507.05 662.58 18983.63 00:21:46.453 00:21:46.453 04:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:21:46.453 04:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:21:46.453 04:12:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:48.981 Initializing NVMe Controllers 00:21:48.981 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:48.981 Controller IO queue size 128, less than required. 00:21:48.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.981 Controller IO queue size 128, less than required. 00:21:48.981 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:48.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:48.981 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:48.981 Initialization complete. Launching workers. 
00:21:48.981 ======================================================== 00:21:48.981 Latency(us) 00:21:48.981 Device Information : IOPS MiB/s Average min max 00:21:48.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1720.41 430.10 75282.08 53986.19 127395.59 00:21:48.981 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 543.47 135.87 245374.39 97098.29 390437.69 00:21:48.981 ======================================================== 00:21:48.981 Total : 2263.88 565.97 116114.75 53986.19 390437.69 00:21:48.981 00:21:48.981 04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:49.238 No valid NVMe controllers or AIO or URING devices found 00:21:49.238 Initializing NVMe Controllers 00:21:49.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:49.238 Controller IO queue size 128, less than required. 00:21:49.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.238 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:49.238 Controller IO queue size 128, less than required. 00:21:49.238 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:49.238 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:21:49.238 WARNING: Some requested NVMe devices were skipped 00:21:49.238 04:12:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:51.779 Initializing NVMe Controllers 00:21:51.780 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:51.780 Controller IO queue size 128, less than required. 00:21:51.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:51.780 Controller IO queue size 128, less than required. 00:21:51.780 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:51.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:51.780 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:51.780 Initialization complete. Launching workers. 
00:21:51.780 00:21:51.780 ==================== 00:21:51.780 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:51.780 TCP transport: 00:21:51.780 polls: 10029 00:21:51.780 idle_polls: 6645 00:21:51.780 sock_completions: 3384 00:21:51.780 nvme_completions: 6125 00:21:51.780 submitted_requests: 9114 00:21:51.780 queued_requests: 1 00:21:51.780 00:21:51.780 ==================== 00:21:51.780 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:51.780 TCP transport: 00:21:51.780 polls: 10131 00:21:51.780 idle_polls: 6828 00:21:51.780 sock_completions: 3303 00:21:51.780 nvme_completions: 5943 00:21:51.780 submitted_requests: 8932 00:21:51.780 queued_requests: 1 00:21:51.780 ======================================================== 00:21:51.780 Latency(us) 00:21:51.780 Device Information : IOPS MiB/s Average min max 00:21:51.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1529.39 382.35 85812.46 57883.57 150710.40 00:21:51.780 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1483.94 370.98 87339.88 41997.74 137577.14 00:21:51.780 ======================================================== 00:21:51.780 Total : 3013.32 753.33 86564.65 41997.74 150710.40 00:21:51.780 00:21:52.037 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:52.037 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@121 -- # sync 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:52.295 rmmod nvme_tcp 00:21:52.295 rmmod nvme_fabrics 00:21:52.295 rmmod nvme_keyring 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 294298 ']' 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 294298 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 294298 ']' 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 294298 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 294298 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 294298' 00:21:52.295 killing process with pid 294298 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # 
kill 294298 00:21:52.295 04:12:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 294298 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.196 04:12:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:56.104 00:21:56.104 real 0m21.783s 00:21:56.104 user 1m6.398s 00:21:56.104 sys 0m5.816s 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:56.104 ************************************ 00:21:56.104 END TEST nvmf_perf 00:21:56.104 ************************************ 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:56.104 ************************************ 00:21:56.104 START TEST nvmf_fio_host 00:21:56.104 ************************************ 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:56.104 * Looking for test storage... 00:21:56.104 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:56.104 04:12:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:56.104 04:12:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:56.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.104 --rc genhtml_branch_coverage=1 00:21:56.104 --rc genhtml_function_coverage=1 00:21:56.104 --rc genhtml_legend=1 00:21:56.104 --rc geninfo_all_blocks=1 00:21:56.104 --rc geninfo_unexecuted_blocks=1 00:21:56.104 00:21:56.104 ' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:56.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.104 --rc genhtml_branch_coverage=1 00:21:56.104 --rc genhtml_function_coverage=1 00:21:56.104 --rc genhtml_legend=1 00:21:56.104 --rc geninfo_all_blocks=1 00:21:56.104 --rc geninfo_unexecuted_blocks=1 00:21:56.104 00:21:56.104 ' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:56.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.104 --rc genhtml_branch_coverage=1 00:21:56.104 --rc genhtml_function_coverage=1 00:21:56.104 --rc genhtml_legend=1 00:21:56.104 --rc geninfo_all_blocks=1 00:21:56.104 --rc geninfo_unexecuted_blocks=1 00:21:56.104 00:21:56.104 ' 00:21:56.104 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:56.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:56.104 --rc genhtml_branch_coverage=1 00:21:56.104 --rc genhtml_function_coverage=1 00:21:56.104 --rc genhtml_legend=1 00:21:56.104 --rc geninfo_all_blocks=1 00:21:56.104 --rc geninfo_unexecuted_blocks=1 00:21:56.104 00:21:56.104 ' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:56.105 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:56.105 04:12:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:21:56.105 04:12:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:21:58.664 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:58.664 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:58.664 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.665 04:12:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:58.665 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:58.665 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:58.665 04:12:26 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:58.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:58.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:21:58.665 00:21:58.665 --- 10.0.0.2 ping statistics --- 00:21:58.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.665 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:58.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:58.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:21:58.665 00:21:58.665 --- 10.0.0.1 ping statistics --- 00:21:58.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:58.665 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=298779 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 298779 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 298779 ']' 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:58.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.665 04:12:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:58.665 [2024-12-09 04:12:26.988758] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:21:58.665 [2024-12-09 04:12:26.988852] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:58.665 [2024-12-09 04:12:27.060991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.665 [2024-12-09 04:12:27.118674] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.665 [2024-12-09 04:12:27.118727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:58.665 [2024-12-09 04:12:27.118755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:58.665 [2024-12-09 04:12:27.118766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:58.665 [2024-12-09 04:12:27.118775] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:58.665 [2024-12-09 04:12:27.120454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.665 [2024-12-09 04:12:27.120511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.665 [2024-12-09 04:12:27.120583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.665 [2024-12-09 04:12:27.120587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.665 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.665 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:58.665 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:58.923 [2024-12-09 04:12:27.499490] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.180 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:59.180 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:59.180 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:59.180 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:59.437 Malloc1 00:21:59.437 04:12:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.695 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:59.952 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.208 [2024-12-09 04:12:28.636517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.208 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:00.465 04:12:28 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.465 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:00.466 04:12:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:00.722 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:00.722 fio-3.35 00:22:00.722 Starting 1 thread 00:22:03.249 00:22:03.249 test: (groupid=0, jobs=1): err= 0: pid=299141: Mon Dec 9 04:12:31 2024 00:22:03.249 read: IOPS=8763, BW=34.2MiB/s (35.9MB/s)(68.7MiB/2006msec) 00:22:03.249 slat (nsec): min=1948, max=223784, avg=2542.58, stdev=2248.04 00:22:03.249 clat (usec): min=2759, max=14211, avg=7965.60, stdev=681.63 00:22:03.249 lat (usec): min=2790, max=14213, avg=7968.14, stdev=681.49 00:22:03.249 clat percentiles (usec): 00:22:03.249 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7177], 20.00th=[ 7439], 00:22:03.249 | 30.00th=[ 7635], 40.00th=[ 7832], 50.00th=[ 7963], 60.00th=[ 8160], 00:22:03.249 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8979], 00:22:03.249 | 99.00th=[ 9372], 99.50th=[ 9634], 99.90th=[13042], 99.95th=[13960], 00:22:03.249 | 99.99th=[14222] 00:22:03.249 bw ( KiB/s): min=33824, max=35672, per=99.94%, avg=35032.00, stdev=830.54, samples=4 00:22:03.249 iops : min= 8456, max= 8918, avg=8758.00, stdev=207.63, samples=4 00:22:03.249 write: IOPS=8770, BW=34.3MiB/s (35.9MB/s)(68.7MiB/2006msec); 0 zone resets 00:22:03.249 slat (usec): min=2, max=200, avg= 2.71, stdev= 1.89 00:22:03.249 clat (usec): min=1782, max=12422, avg=6571.45, stdev=559.19 00:22:03.249 lat (usec): min=1791, max=12424, avg=6574.16, stdev=559.11 00:22:03.249 clat percentiles (usec): 00:22:03.249 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5932], 20.00th=[ 6128], 00:22:03.249 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:22:03.249 | 
70.00th=[ 6849], 80.00th=[ 6980], 90.00th=[ 7177], 95.00th=[ 7373], 00:22:03.249 | 99.00th=[ 7767], 99.50th=[ 7963], 99.90th=[10814], 99.95th=[11600], 00:22:03.249 | 99.99th=[12387] 00:22:03.249 bw ( KiB/s): min=34688, max=35456, per=99.94%, avg=35060.00, stdev=360.00, samples=4 00:22:03.249 iops : min= 8672, max= 8864, avg=8765.00, stdev=90.00, samples=4 00:22:03.249 lat (msec) : 2=0.01%, 4=0.13%, 10=99.63%, 20=0.23% 00:22:03.249 cpu : usr=63.69%, sys=34.76%, ctx=82, majf=0, minf=35 00:22:03.249 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:22:03.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:03.249 issued rwts: total=17579,17594,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.249 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:03.249 00:22:03.249 Run status group 0 (all jobs): 00:22:03.249 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.0MB), run=2006-2006msec 00:22:03.249 WRITE: bw=34.3MiB/s (35.9MB/s), 34.3MiB/s-34.3MiB/s (35.9MB/s-35.9MB/s), io=68.7MiB (72.1MB), run=2006-2006msec 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:03.249 04:12:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:03.249 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:03.249 fio-3.35 00:22:03.249 Starting 1 thread 00:22:05.779 00:22:05.779 test: (groupid=0, jobs=1): err= 0: pid=299593: Mon Dec 9 04:12:34 2024 00:22:05.779 read: IOPS=7817, BW=122MiB/s (128MB/s)(246MiB/2010msec) 00:22:05.779 slat (usec): min=2, max=106, avg= 3.78, stdev= 2.00 00:22:05.779 clat (usec): min=2589, max=17475, avg=9239.99, stdev=2301.00 00:22:05.779 lat (usec): min=2594, max=17478, avg=9243.76, stdev=2301.03 00:22:05.779 clat percentiles (usec): 00:22:05.779 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7308], 00:22:05.779 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9634], 00:22:05.779 | 70.00th=[10290], 80.00th=[11076], 90.00th=[12256], 95.00th=[13566], 00:22:05.779 | 99.00th=[15533], 99.50th=[16188], 99.90th=[16909], 99.95th=[17171], 00:22:05.779 | 99.99th=[17433] 00:22:05.779 bw ( KiB/s): min=62208, max=70400, per=52.74%, avg=65968.00, stdev=3869.53, samples=4 00:22:05.779 iops : min= 3888, max= 4400, avg=4123.00, stdev=241.85, samples=4 00:22:05.779 write: IOPS=4569, BW=71.4MiB/s (74.9MB/s)(134MiB/1879msec); 0 zone resets 00:22:05.779 slat (usec): min=30, max=145, avg=33.86, stdev= 5.63 00:22:05.779 clat (usec): min=5836, max=20594, avg=12278.94, stdev=2237.77 00:22:05.779 lat (usec): min=5872, max=20625, avg=12312.80, stdev=2237.85 00:22:05.779 clat percentiles (usec): 00:22:05.779 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9503], 
20.00th=[10290], 00:22:05.779 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:22:05.779 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15270], 95.00th=[16188], 00:22:05.779 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[20055], 00:22:05.779 | 99.99th=[20579] 00:22:05.779 bw ( KiB/s): min=64896, max=71296, per=93.18%, avg=68136.00, stdev=3404.19, samples=4 00:22:05.779 iops : min= 4056, max= 4456, avg=4258.50, stdev=212.76, samples=4 00:22:05.779 lat (msec) : 4=0.16%, 10=47.98%, 20=51.83%, 50=0.02% 00:22:05.779 cpu : usr=75.16%, sys=23.49%, ctx=45, majf=0, minf=58 00:22:05.779 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:05.779 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:05.779 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:05.779 issued rwts: total=15713,8587,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:05.779 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:05.779 00:22:05.779 Run status group 0 (all jobs): 00:22:05.779 READ: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=246MiB (257MB), run=2010-2010msec 00:22:05.779 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=134MiB (141MB), run=1879-1879msec 00:22:05.779 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 
00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:06.037 rmmod nvme_tcp 00:22:06.037 rmmod nvme_fabrics 00:22:06.037 rmmod nvme_keyring 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 298779 ']' 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 298779 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 298779 ']' 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 298779 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:06.037 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:06.038 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 298779 00:22:06.038 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:06.038 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:06.038 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 298779' 
00:22:06.038 killing process with pid 298779 00:22:06.038 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 298779 00:22:06.038 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 298779 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:06.297 04:12:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:08.824 00:22:08.824 real 0m12.400s 00:22:08.824 user 0m36.415s 00:22:08.824 sys 0m4.166s 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.824 ************************************ 
00:22:08.824 END TEST nvmf_fio_host 00:22:08.824 ************************************ 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:08.824 ************************************ 00:22:08.824 START TEST nvmf_failover 00:22:08.824 ************************************ 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:08.824 * Looking for test storage... 00:22:08.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:22:08.824 04:12:36 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:08.824 04:12:37 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:08.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.824 --rc genhtml_branch_coverage=1 00:22:08.824 --rc genhtml_function_coverage=1 00:22:08.824 --rc genhtml_legend=1 00:22:08.824 --rc geninfo_all_blocks=1 00:22:08.824 --rc geninfo_unexecuted_blocks=1 00:22:08.824 00:22:08.824 ' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:08.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.824 --rc genhtml_branch_coverage=1 00:22:08.824 --rc genhtml_function_coverage=1 00:22:08.824 --rc genhtml_legend=1 00:22:08.824 --rc geninfo_all_blocks=1 00:22:08.824 --rc geninfo_unexecuted_blocks=1 00:22:08.824 00:22:08.824 ' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:08.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.824 --rc genhtml_branch_coverage=1 00:22:08.824 --rc genhtml_function_coverage=1 00:22:08.824 --rc genhtml_legend=1 00:22:08.824 --rc geninfo_all_blocks=1 00:22:08.824 --rc geninfo_unexecuted_blocks=1 00:22:08.824 00:22:08.824 ' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:08.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:08.824 --rc genhtml_branch_coverage=1 00:22:08.824 --rc genhtml_function_coverage=1 00:22:08.824 --rc genhtml_legend=1 00:22:08.824 --rc 
geninfo_all_blocks=1 00:22:08.824 --rc geninfo_unexecuted_blocks=1 00:22:08.824 00:22:08.824 ' 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:08.824 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:08.825 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:22:08.825 04:12:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 
-- # pci_net_devs=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.723 04:12:39 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:10.723 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:10.723 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:10.723 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:10.723 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.723 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:22:10.982 00:22:10.982 --- 10.0.0.2 ping statistics --- 00:22:10.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.982 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:10.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:22:10.982 00:22:10.982 --- 10.0.0.1 ping statistics --- 00:22:10.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.982 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=301801 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 301801 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 301801 ']' 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.982 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:10.982 [2024-12-09 04:12:39.467938] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:22:10.982 [2024-12-09 04:12:39.468015] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.982 [2024-12-09 04:12:39.539929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:11.241 [2024-12-09 04:12:39.598384] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.241 [2024-12-09 04:12:39.598441] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.241 [2024-12-09 04:12:39.598455] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.241 [2024-12-09 04:12:39.598466] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:11.241 [2024-12-09 04:12:39.598476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:11.241 [2024-12-09 04:12:39.599972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.241 [2024-12-09 04:12:39.600040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.241 [2024-12-09 04:12:39.600043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.241 04:12:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:11.499 [2024-12-09 04:12:40.046704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.499 04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:12.065 Malloc0 00:22:12.065 04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:12.323 04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:12.581 04:12:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:12.838 [2024-12-09 04:12:41.242433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.838 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:13.096 [2024-12-09 04:12:41.511186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:13.096 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:13.355 [2024-12-09 04:12:41.836108] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=302091 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 302091 /var/tmp/bdevperf.sock 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 302091 ']' 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.355 04:12:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:13.613 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.613 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:13.613 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:14.179 NVMe0n1 00:22:14.179 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:14.437 00:22:14.437 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=302224 00:22:14.437 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:14.437 04:12:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:22:15.371 04:12:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:15.630 [2024-12-09 04:12:44.167954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b8be00 is same with the state(6) to be set 00:22:15.630 04:12:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:18.909 04:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:19.167 00:22:19.167 04:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:19.432 [2024-12-09 04:12:47.866544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv
state of tqpair=0x1b8c8b0 is same with the state(6) to be set 00:22:19.432 04:12:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:22.715 04:12:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:22.715 [2024-12-09 04:12:51.151157] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:22.715 04:12:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:23.650 04:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:24.216 04:12:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 302224 00:22:29.479 { 00:22:29.479 "results": [ 00:22:29.479 { 00:22:29.479 "job": "NVMe0n1", 00:22:29.479 "core_mask": "0x1", 00:22:29.479 "workload": "verify", 00:22:29.479 "status": "finished", 00:22:29.479 "verify_range": { 00:22:29.479 "start": 0, 00:22:29.479 "length": 16384 00:22:29.479 }, 00:22:29.479 "queue_depth": 128, 00:22:29.479 "io_size": 4096, 00:22:29.479 "runtime": 15.042254, 00:22:29.479 "iops": 8276.818088565717, 00:22:29.479 "mibps": 32.33132065845983, 00:22:29.479 "io_failed": 10301, 00:22:29.479 "io_timeout": 0, 00:22:29.479 "avg_latency_us": 14218.11726537573, 00:22:29.479 "min_latency_us": 570.4059259259259, 00:22:29.479 "max_latency_us": 43690.666666666664 00:22:29.479 } 00:22:29.479 ], 00:22:29.479 "core_count": 1 00:22:29.479 } 00:22:29.479 04:12:58
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 302091 00:22:29.479 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 302091 ']' 00:22:29.479 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 302091 00:22:29.479 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:29.736 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.736 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 302091 00:22:29.736 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.736 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.736 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 302091' 00:22:29.736 killing process with pid 302091 00:22:29.736 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 302091 00:22:29.737 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 302091 00:22:30.006 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:30.006 [2024-12-09 04:12:41.903939] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:22:30.006 [2024-12-09 04:12:41.904040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid302091 ] 00:22:30.006 [2024-12-09 04:12:41.977639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.006 [2024-12-09 04:12:42.036205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.006 Running I/O for 15 seconds... 00:22:30.006 8590.00 IOPS, 33.55 MiB/s [2024-12-09T03:12:58.582Z] [2024-12-09 04:12:44.169770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.006 [2024-12-09 04:12:44.169809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007
[2024-12-09 04:12:44.170962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.170975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.170990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 
lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 
[2024-12-09 04:12:44.171496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.171648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171675] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 
lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.171959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.171988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 
04:12:44.172017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.007 [2024-12-09 04:12:44.172167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172196] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 [2024-12-09 04:12:44.172915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.007 
[2024-12-09 04:12:44.172953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.007 [2024-12-09 04:12:44.172968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.172981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.172996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173116] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.008 [2024-12-09 04:12:44.173186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.008 [2024-12-09 04:12:44.173233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84016 len:8 PRP1 0x0 PRP2 0x0 00:22:30.008 [2024-12-09 04:12:44.173246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173298] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.008 [2024-12-09 04:12:44.173313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.008 [2024-12-09 04:12:44.173324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84024 len:8 PRP1 0x0 PRP2 0x0 00:22:30.008 [2024-12-09 04:12:44.173337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.173355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.008 [2024-12-09 04:12:44.173369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.008 [2024-12-09 04:12:44.173381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84032 len:8 PRP1 0x0 PRP2 0x0 00:22:30.008 [2024-12-09 04:12:44.173399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical abort / manual-completion / print-completion sequence repeated for WRITE sqid:1 cid:0 nsid:1 commands lba:84040 through lba:84160 (len:8 each), all completed as ABORTED - SQ DELETION (00/08) ...]
00:22:30.008 [2024-12-09 04:12:44.174315]
bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:30.008 [2024-12-09 04:12:44.174356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.008 [2024-12-09 04:12:44.174375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.174390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.008 [2024-12-09 04:12:44.174404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.174418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.008 [2024-12-09 04:12:44.174431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.174446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.008 [2024-12-09 04:12:44.174459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.008 [2024-12-09 04:12:44.174472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:22:30.008 [2024-12-09 04:12:44.174523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf9180 (9): Bad file descriptor 00:22:30.008 [2024-12-09 04:12:44.177935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:30.008 [2024-12-09 04:12:44.327739] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:22:30.008 7898.50 IOPS, 30.85 MiB/s [2024-12-09T03:12:58.584Z] 8184.33 IOPS, 31.97 MiB/s [2024-12-09T03:12:58.584Z] 8283.25 IOPS, 32.36 MiB/s [2024-12-09T03:12:58.584Z] [2024-12-09 04:12:47.867462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.008 [2024-12-09 04:12:47.867505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print-command / print-completion pair repeated for READ sqid:1 nsid:1 commands lba:112624 through lba:113224 (len:8 each, varying cid), all completed as ABORTED - SQ DELETION (00/08) ...]
00:22:30.009 [2024-12-09 04:12:47.869833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.009 [2024-12-09 04:12:47.869847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical print-command / print-completion pair repeated for WRITE sqid:1 nsid:1 commands lba:113240 through lba:113272 (len:8 each, varying cid), all completed as ABORTED - SQ DELETION (00/08) ...]
00:22:30.010 [2024-12-09 04:12:47.870028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 
[2024-12-09 04:12:47.870393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870565] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:30.010 [2024-12-09 04:12:47.870920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.870979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.870993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:113536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:47.871403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.010 [2024-12-09 04:12:47.871465] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.010 [2024-12-09 04:12:47.871477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113632 len:8 PRP1 0x0 PRP2 0x0 00:22:30.010 [2024-12-09 04:12:47.871491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871556] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:30.010 [2024-12-09 04:12:47.871617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.010 [2024-12-09 04:12:47.871636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.010 [2024-12-09 04:12:47.871680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.010 [2024-12-09 04:12:47.871708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.010 [2024-12-09 04:12:47.871736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:47.871754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 
00:22:30.010 [2024-12-09 04:12:47.871818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf9180 (9): Bad file descriptor 00:22:30.010 [2024-12-09 04:12:47.875188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:30.010 [2024-12-09 04:12:47.906417] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:30.010 8252.00 IOPS, 32.23 MiB/s [2024-12-09T03:12:58.586Z] 8270.00 IOPS, 32.30 MiB/s [2024-12-09T03:12:58.586Z] 8307.71 IOPS, 32.45 MiB/s [2024-12-09T03:12:58.586Z] 8345.88 IOPS, 32.60 MiB/s [2024-12-09T03:12:58.586Z] 8376.00 IOPS, 32.72 MiB/s [2024-12-09T03:12:58.586Z] [2024-12-09 04:12:52.477937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:41552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 
04:12:52.478111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:41576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:41584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:41600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:114 nsid:1 lba:41608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:41616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:41632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.010 [2024-12-09 04:12:52.478474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:41640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.010 [2024-12-09 04:12:52.478488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:41648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:30.011 [2024-12-09 04:12:52.478534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:41680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:41688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:41696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011 [2024-12-09 04:12:52.478749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:30.011 [2024-12-09 04:12:52.478763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.011
[... identical ABORTED - SQ DELETION (00/08) notice pairs repeated for the remaining queued WRITE commands (lba 41712-42048) and READ commands (lba 41032-41520) elided ...]
00:22:30.012 [2024-12-09 04:12:52.482034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd1c010 is same with the state(6) to be set 00:22:30.012 [2024-12-09 04:12:52.482051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:30.012 [2024-12-09 04:12:52.482063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:30.012 [2024-12-09 04:12:52.482075] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41528 len:8 PRP1 0x0 PRP2 0x0 00:22:30.012 [2024-12-09 04:12:52.482088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.012 [2024-12-09 04:12:52.482151] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:30.012 [2024-12-09 04:12:52.482191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.012 [2024-12-09 04:12:52.482209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.012 [2024-12-09 04:12:52.482225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.012 [2024-12-09 04:12:52.482250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.012 [2024-12-09 04:12:52.482285] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.012 [2024-12-09 04:12:52.482301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.012 [2024-12-09 04:12:52.482320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.012 [2024-12-09 04:12:52.482334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.012 [2024-12-09 04:12:52.482348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:22:30.012 [2024-12-09 04:12:52.482405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcf9180 (9): Bad file descriptor 00:22:30.012 [2024-12-09 04:12:52.485733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:30.012 [2024-12-09 04:12:52.555053] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:30.012 8317.30 IOPS, 32.49 MiB/s [2024-12-09T03:12:58.588Z] 8311.18 IOPS, 32.47 MiB/s [2024-12-09T03:12:58.588Z] 8310.83 IOPS, 32.46 MiB/s [2024-12-09T03:12:58.588Z] 8305.85 IOPS, 32.44 MiB/s [2024-12-09T03:12:58.588Z] 8302.29 IOPS, 32.43 MiB/s [2024-12-09T03:12:58.588Z] 8299.93 IOPS, 32.42 MiB/s 00:22:30.012 Latency(us) 00:22:30.012 [2024-12-09T03:12:58.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.012 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:30.012 Verification LBA range: start 0x0 length 0x4000 00:22:30.012 NVMe0n1 : 15.04 8276.82 32.33 684.80 0.00 14218.12 570.41 43690.67 00:22:30.012 [2024-12-09T03:12:58.588Z] =================================================================================================================== 00:22:30.012 [2024-12-09T03:12:58.588Z] Total : 8276.82 32.33 684.80 0.00 14218.12 570.41 43690.67 00:22:30.012 Received shutdown signal, test time was about 15.000000 seconds 00:22:30.012 00:22:30.012 Latency(us) 00:22:30.012 [2024-12-09T03:12:58.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.012 [2024-12-09T03:12:58.588Z] =================================================================================================================== 00:22:30.012 [2024-12-09T03:12:58.588Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:30.012 
04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=304067 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 304067 /var/tmp/bdevperf.sock 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 304067 ']' 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:30.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.012 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:30.270 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.270 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:30.270 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:30.526 [2024-12-09 04:12:58.857038] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:30.526 04:12:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:30.783 [2024-12-09 04:12:59.121745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:30.783 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:31.347 NVMe0n1 00:22:31.347 04:12:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:31.604 00:22:31.605 04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:32.168 00:22:32.169 04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.169 04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:32.426 04:13:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:32.686 04:13:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:35.966 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:35.966 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:35.966 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=304737 00:22:35.966 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:35.966 04:13:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 304737 00:22:36.900 { 00:22:36.900 "results": [ 00:22:36.900 { 00:22:36.900 "job": "NVMe0n1", 00:22:36.900 "core_mask": "0x1", 00:22:36.900 "workload": "verify", 00:22:36.900 "status": "finished", 00:22:36.900 "verify_range": { 00:22:36.900 "start": 0, 00:22:36.900 "length": 16384 00:22:36.900 }, 00:22:36.900 "queue_depth": 128, 00:22:36.900 "io_size": 4096, 00:22:36.900 "runtime": 1.012367, 00:22:36.900 "iops": 8487.040766836533, 00:22:36.900 "mibps": 33.15250299545521, 00:22:36.900 "io_failed": 0, 00:22:36.900 "io_timeout": 0, 00:22:36.900 "avg_latency_us": 
14983.00993999586, 00:22:36.900 "min_latency_us": 1953.9437037037037, 00:22:36.900 "max_latency_us": 14854.826666666666 00:22:36.900 } 00:22:36.900 ], 00:22:36.900 "core_count": 1 00:22:36.900 } 00:22:36.900 04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:36.900 [2024-12-09 04:12:58.365050] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:22:36.900 [2024-12-09 04:12:58.365147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid304067 ] 00:22:36.900 [2024-12-09 04:12:58.433880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.900 [2024-12-09 04:12:58.490322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.900 [2024-12-09 04:13:01.014247] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:36.900 [2024-12-09 04:13:01.014374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.900 [2024-12-09 04:13:01.014398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.900 [2024-12-09 04:13:01.014417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.900 [2024-12-09 04:13:01.014430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.900 [2024-12-09 04:13:01.014444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:36.900 [2024-12-09 04:13:01.014458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.900 [2024-12-09 04:13:01.014473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:36.900 [2024-12-09 04:13:01.014487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:36.900 [2024-12-09 04:13:01.014501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:36.900 [2024-12-09 04:13:01.014551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:36.900 [2024-12-09 04:13:01.014585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15aa180 (9): Bad file descriptor 00:22:36.900 [2024-12-09 04:13:01.060540] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:36.900 Running I/O for 1 seconds... 
00:22:36.900 8400.00 IOPS, 32.81 MiB/s 00:22:36.900 Latency(us) 00:22:36.900 [2024-12-09T03:13:05.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:36.900 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:36.900 Verification LBA range: start 0x0 length 0x4000 00:22:36.900 NVMe0n1 : 1.01 8487.04 33.15 0.00 0.00 14983.01 1953.94 14854.83 00:22:36.900 [2024-12-09T03:13:05.476Z] =================================================================================================================== 00:22:36.900 [2024-12-09T03:13:05.476Z] Total : 8487.04 33.15 0.00 0.00 14983.01 1953.94 14854.83 00:22:36.900 04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.900 04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:37.157 04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:37.415 04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:37.415 04:13:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:37.672 04:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:38.236 04:13:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 304067 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 304067 ']' 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 304067 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304067 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304067' 00:22:41.511 killing process with pid 304067 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 304067 00:22:41.511 04:13:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 304067 00:22:41.511 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:41.511 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:42.073 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.074 rmmod nvme_tcp 00:22:42.074 rmmod nvme_fabrics 00:22:42.074 rmmod nvme_keyring 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 301801 ']' 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 301801 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 301801 ']' 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 301801 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 301801 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 301801' 00:22:42.074 killing process with pid 301801 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 301801 00:22:42.074 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 301801 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:42.332 04:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.237 04:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.237 00:22:44.237 real 0m35.880s 00:22:44.237 user 2m6.032s 00:22:44.237 sys 
0m6.176s 00:22:44.237 04:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.237 04:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:44.237 ************************************ 00:22:44.237 END TEST nvmf_failover 00:22:44.237 ************************************ 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:44.496 ************************************ 00:22:44.496 START TEST nvmf_host_discovery 00:22:44.496 ************************************ 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:44.496 * Looking for test storage... 
00:22:44.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:44.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.496 --rc genhtml_branch_coverage=1 00:22:44.496 --rc genhtml_function_coverage=1 00:22:44.496 --rc 
genhtml_legend=1 00:22:44.496 --rc geninfo_all_blocks=1 00:22:44.496 --rc geninfo_unexecuted_blocks=1 00:22:44.496 00:22:44.496 ' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:44.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.496 --rc genhtml_branch_coverage=1 00:22:44.496 --rc genhtml_function_coverage=1 00:22:44.496 --rc genhtml_legend=1 00:22:44.496 --rc geninfo_all_blocks=1 00:22:44.496 --rc geninfo_unexecuted_blocks=1 00:22:44.496 00:22:44.496 ' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:44.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.496 --rc genhtml_branch_coverage=1 00:22:44.496 --rc genhtml_function_coverage=1 00:22:44.496 --rc genhtml_legend=1 00:22:44.496 --rc geninfo_all_blocks=1 00:22:44.496 --rc geninfo_unexecuted_blocks=1 00:22:44.496 00:22:44.496 ' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:44.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.496 --rc genhtml_branch_coverage=1 00:22:44.496 --rc genhtml_function_coverage=1 00:22:44.496 --rc genhtml_legend=1 00:22:44.496 --rc geninfo_all_blocks=1 00:22:44.496 --rc geninfo_unexecuted_blocks=1 00:22:44.496 00:22:44.496 ' 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.496 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.497 04:13:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.497 04:13:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.497 04:13:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:22:44.497 04:13:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:46.403 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:22:46.404 
04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.404 04:13:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:46.404 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:46.404 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:46.404 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:46.404 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:46.404 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.662 04:13:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.662 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.662 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.662 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:46.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:22:46.663 00:22:46.663 --- 10.0.0.2 ping statistics --- 00:22:46.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.663 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:46.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:22:46.663 00:22:46.663 --- 10.0.0.1 ping statistics --- 00:22:46.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.663 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.663 
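The `ipts` call traced above is a thin wrapper: it forwards its arguments to `iptables` and tags the rule with an `SPDK_NVMF:` comment so teardown can later find and delete only the rules the test added. A minimal runnable sketch of that pattern (with `echo` standing in for `iptables`, so it runs without root — an assumption for illustration; the real helper in `nvmf/common.sh` invokes `iptables` directly):

```shell
# Sketch of the "ipts" wrapper seen in the trace: forward all arguments to
# iptables and append a "-m comment" match whose text is the original rule,
# prefixed with SPDK_NVMF:. Here iptables is replaced by echo so the sketch
# is runnable unprivileged.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# Reproduces the rule shape recorded in the log above:
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
```

Tagging rules this way lets cleanup code grep `iptables-save` for `SPDK_NVMF:` instead of tracking rule numbers, which shift as rules are inserted and removed.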
04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=307466 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 307466 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 307466 ']' 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
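`waitforlisten` above (note the `local max_retries=100` in the trace) blocks until the freshly launched `nvmf_tgt` creates its RPC socket at `/var/tmp/spdk.sock`. A simplified, runnable sketch of that polling pattern — the real helper in `autotest_common.sh` also probes the socket with `rpc.py`; a plain existence check and the 0.1 s interval are assumptions here:

```shell
# Simplified sketch of the waitforlisten pattern: poll for a path (e.g. a
# UNIX-domain RPC socket) to appear, giving up after max_retries attempts.
waitforpath() {
    local path=$1 max_retries=${2:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0   # target is up; socket exists
        sleep 0.1
    done
    return 1                         # process never came up in time
}
```

The retry bound matters in CI: without it, a target that crashes during startup would hang the whole pipeline instead of failing the stage promptly.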
00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:46.663 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:46.921 [2024-12-09 04:13:15.263974] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:22:46.921 [2024-12-09 04:13:15.264065] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.921 [2024-12-09 04:13:15.338208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.921 [2024-12-09 04:13:15.397098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.921 [2024-12-09 04:13:15.397170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.921 [2024-12-09 04:13:15.397184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.921 [2024-12-09 04:13:15.397196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.921 [2024-12-09 04:13:15.397205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
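The discovery checks that follow lean on the `get_subsystem_names` / `get_bdev_list` helpers, whose pipeline is visible in the trace: reduce an RPC JSON reply to a sorted, space-separated list of `.name` fields via `jq | sort | xargs`, so it can be string-compared against an expected value like `'' == ''`. A runnable sketch with canned JSON standing in for live `rpc_cmd -s /tmp/host.sock bdev_get_bdevs` output (guarded so it degrades gracefully where `jq` is absent):

```shell
# Sketch of the get_bdev_list / get_subsystem_names reduction: extract each
# entry's .name, sort for a deterministic order, and flatten to one line.
get_names() {
    jq -r '.[].name' | sort | xargs
}

if command -v jq >/dev/null; then
    # Two null bdevs, as created by bdev_null_create in the trace below.
    echo '[{"name":"null1"},{"name":"null0"}]' | get_names   # → null0 null1
fi
```

Sorting before comparison is the point: RPC replies carry no ordering guarantee, and `xargs` normalizes the list to a single comparable string.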
00:22:46.921 [2024-12-09 04:13:15.397798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 [2024-12-09 04:13:15.546189] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 [2024-12-09 04:13:15.554446] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:47.178 04:13:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 null0 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 null1 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=307491 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 307491 /tmp/host.sock 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 307491 ']' 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:47.178 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.178 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.178 [2024-12-09 04:13:15.628481] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:22:47.178 [2024-12-09 04:13:15.628572] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid307491 ] 00:22:47.178 [2024-12-09 04:13:15.693793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.178 [2024-12-09 04:13:15.750540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:47.436 04:13:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:47.436 04:13:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.436 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.437 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.437 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:47.437 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:47.437 04:13:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.437 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:47.437 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:47.437 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.437 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.437 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.695 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 [2024-12-09 04:13:16.172011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.696 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:22:47.955 04:13:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:22:48.520 [2024-12-09 04:13:16.973386] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:22:48.520 [2024-12-09 04:13:16.973412] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:22:48.520 [2024-12-09 04:13:16.973436] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:48.520 [2024-12-09 04:13:17.060730] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:22:48.777 [2024-12-09 04:13:17.243805] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:22:48.777 [2024-12-09 04:13:17.244748] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1573aa0:1 started.
00:22:48.777 [2024-12-09 04:13:17.246525] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:48.777 [2024-12-09 04:13:17.246548] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:48.777 [2024-12-09 04:13:17.292989] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1573aa0 was disconnected and freed. delete nvme_qpair.
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:48.777 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:22:49.035 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036 [2024-12-09 04:13:17.516359] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1542230:1 started.
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:49.036 [2024-12-09 04:13:17.523431] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1542230 was disconnected and freed. delete nvme_qpair.
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036 [2024-12-09 04:13:17.600428] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:49.036 [2024-12-09 04:13:17.600647] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:22:49.036 [2024-12-09 04:13:17.600678] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:49.036 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:49.293 [2024-12-09 04:13:17.686929] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:22:49.293 04:13:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:22:49.293 [2024-12-09 04:13:17.787875] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:22:49.293 [2024-12-09 04:13:17.787930] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:22:49.293 [2024-12-09 04:13:17.787945] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:22:49.293 [2024-12-09 04:13:17.787953] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length'
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.223 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.482 [2024-12-09 04:13:18.812822] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:22:50.482 [2024-12-09 04:13:18.812861] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:22:50.482 [2024-12-09 04:13:18.817862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.482 [2024-12-09 04:13:18.817896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:22:50.482 [2024-12-09 04:13:18.817929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.482 [2024-12-09 04:13:18.817945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482 [2024-12-09 04:13:18.817967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.482 [2024-12-09 04:13:18.817981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482 [2024-12-09 04:13:18.817995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:50.482 [2024-12-09 04:13:18.818008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:50.482 [2024-12-09 04:13:18.818022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:22:50.482 [2024-12-09 04:13:18.827854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:50.482 [2024-12-09 04:13:18.837894] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:22:50.482 [2024-12-09 04:13:18.837915] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:22:50.482 [2024-12-09 04:13:18.837928] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:22:50.482 [2024-12-09 04:13:18.837937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:22:50.482 [2024-12-09 04:13:18.837969] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:22:50.482 [2024-12-09 04:13:18.838149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.482 [2024-12-09 04:13:18.838179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.482 [2024-12-09 04:13:18.838196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.482 [2024-12-09 04:13:18.838218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.482 [2024-12-09 04:13:18.838239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.482 [2024-12-09 04:13:18.838287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.482 [2024-12-09 04:13:18.838306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.482 [2024-12-09 04:13:18.838320] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.482 [2024-12-09 04:13:18.838330] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.482 [2024-12-09 04:13:18.838338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.482 [2024-12-09 04:13:18.848001] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.482 [2024-12-09 04:13:18.848021] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:50.482 [2024-12-09 04:13:18.848035] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.482 [2024-12-09 04:13:18.848042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.482 [2024-12-09 04:13:18.848067] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.482 [2024-12-09 04:13:18.848295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.482 [2024-12-09 04:13:18.848323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.482 [2024-12-09 04:13:18.848340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.482 [2024-12-09 04:13:18.848363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.482 [2024-12-09 04:13:18.848383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.482 [2024-12-09 04:13:18.848397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.482 [2024-12-09 04:13:18.848412] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.482 [2024-12-09 04:13:18.848425] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.482 [2024-12-09 04:13:18.848434] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.482 [2024-12-09 04:13:18.848441] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.482 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:50.483 [2024-12-09 04:13:18.858100] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:22:50.483 [2024-12-09 04:13:18.858123] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.483 [2024-12-09 04:13:18.858132] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.858139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.483 [2024-12-09 04:13:18.858165] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.858351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.483 [2024-12-09 04:13:18.858390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.483 [2024-12-09 04:13:18.858409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.483 [2024-12-09 04:13:18.858432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.483 [2024-12-09 04:13:18.858453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.483 [2024-12-09 04:13:18.858467] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.483 [2024-12-09 04:13:18.858481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.483 [2024-12-09 04:13:18.858494] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.483 [2024-12-09 04:13:18.858503] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:22:50.483 [2024-12-09 04:13:18.858511] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.483 [2024-12-09 04:13:18.868199] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.483 [2024-12-09 04:13:18.868222] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.483 [2024-12-09 04:13:18.868230] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.868237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.483 [2024-12-09 04:13:18.868284] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.868426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.483 [2024-12-09 04:13:18.868455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.483 [2024-12-09 04:13:18.868472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.483 [2024-12-09 04:13:18.868496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.483 [2024-12-09 04:13:18.868517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.483 [2024-12-09 04:13:18.868531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.483 [2024-12-09 04:13:18.868545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:22:50.483 [2024-12-09 04:13:18.868568] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.483 [2024-12-09 04:13:18.868577] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.483 [2024-12-09 04:13:18.868584] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.483 [2024-12-09 04:13:18.878318] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.483 [2024-12-09 04:13:18.878339] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.483 [2024-12-09 04:13:18.878347] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.878355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.483 [2024-12-09 04:13:18.878379] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:50.483 [2024-12-09 04:13:18.878551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.483 [2024-12-09 04:13:18.878579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.483 [2024-12-09 04:13:18.878596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.483 [2024-12-09 04:13:18.878618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.483 [2024-12-09 04:13:18.878639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.483 [2024-12-09 04:13:18.878652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.483 [2024-12-09 04:13:18.878666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.483 [2024-12-09 04:13:18.878678] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.483 [2024-12-09 04:13:18.878687] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.483 [2024-12-09 04:13:18.878695] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.483 [2024-12-09 04:13:18.888414] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.483 [2024-12-09 04:13:18.888434] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:50.483 [2024-12-09 04:13:18.888443] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.888450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.483 [2024-12-09 04:13:18.888474] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.888714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.483 [2024-12-09 04:13:18.888741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.483 [2024-12-09 04:13:18.888758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.483 [2024-12-09 04:13:18.888779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.483 [2024-12-09 04:13:18.888799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.483 [2024-12-09 04:13:18.888813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.483 [2024-12-09 04:13:18.888827] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.483 [2024-12-09 04:13:18.888839] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.483 [2024-12-09 04:13:18.888863] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.483 [2024-12-09 04:13:18.888871] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:50.483 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:50.483 [2024-12-09 04:13:18.898509] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:22:50.483 [2024-12-09 04:13:18.898533] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.483 [2024-12-09 04:13:18.898542] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.898575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.483 [2024-12-09 04:13:18.898601] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.483 [2024-12-09 04:13:18.898790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.484 [2024-12-09 04:13:18.898818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.484 [2024-12-09 04:13:18.898836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.484 [2024-12-09 04:13:18.898858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.484 [2024-12-09 04:13:18.898879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.484 [2024-12-09 04:13:18.898893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.484 [2024-12-09 04:13:18.898908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.484 [2024-12-09 04:13:18.898922] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.484 [2024-12-09 04:13:18.898933] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:22:50.484 [2024-12-09 04:13:18.898943] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.484 [2024-12-09 04:13:18.908636] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.484 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.484 [2024-12-09 04:13:18.908678] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.484 [2024-12-09 04:13:18.908687] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.484 [2024-12-09 04:13:18.908694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.484 [2024-12-09 04:13:18.908718] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:50.484 [2024-12-09 04:13:18.908889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.484 [2024-12-09 04:13:18.908917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.484 [2024-12-09 04:13:18.908935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.484 [2024-12-09 04:13:18.908958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.484 [2024-12-09 04:13:18.908979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.484 [2024-12-09 04:13:18.908993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.484 [2024-12-09 04:13:18.909008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.484 [2024-12-09 04:13:18.909021] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.484 [2024-12-09 04:13:18.909030] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.484 [2024-12-09 04:13:18.909037] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.484 [2024-12-09 04:13:18.918752] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.484 [2024-12-09 04:13:18.918771] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:22:50.484 [2024-12-09 04:13:18.918779] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.484 [2024-12-09 04:13:18.918786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.484 [2024-12-09 04:13:18.918809] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.484 [2024-12-09 04:13:18.919002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.484 [2024-12-09 04:13:18.919029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.484 [2024-12-09 04:13:18.919045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.484 [2024-12-09 04:13:18.919067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.484 [2024-12-09 04:13:18.919087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.484 [2024-12-09 04:13:18.919102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.484 [2024-12-09 04:13:18.919115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.484 [2024-12-09 04:13:18.919128] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.484 [2024-12-09 04:13:18.919136] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.484 [2024-12-09 04:13:18.919144] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:50.484 [2024-12-09 04:13:18.928842] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.484 [2024-12-09 04:13:18.928861] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.484 [2024-12-09 04:13:18.928870] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.484 [2024-12-09 04:13:18.928876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.484 [2024-12-09 04:13:18.928904] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:22:50.484 [2024-12-09 04:13:18.929096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.484 [2024-12-09 04:13:18.929124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.484 [2024-12-09 04:13:18.929140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.484 [2024-12-09 04:13:18.929162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.484 [2024-12-09 04:13:18.929181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.484 [2024-12-09 04:13:18.929195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.484 [2024-12-09 04:13:18.929208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.484 [2024-12-09 04:13:18.929220] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:22:50.484 [2024-12-09 04:13:18.929228] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.484 [2024-12-09 04:13:18.929236] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:22:50.484 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:22:50.484 04:13:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:50.484 [2024-12-09 04:13:18.938937] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:22:50.484 [2024-12-09 04:13:18.938957] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:22:50.484 [2024-12-09 04:13:18.938965] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:22:50.484 [2024-12-09 04:13:18.938972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:50.484 [2024-12-09 04:13:18.938997] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:22:50.484 [2024-12-09 04:13:18.939098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.484 [2024-12-09 04:13:18.939137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1544050 with addr=10.0.0.2, port=4420 00:22:50.484 [2024-12-09 04:13:18.939153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1544050 is same with the state(6) to be set 00:22:50.484 [2024-12-09 04:13:18.939174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1544050 (9): Bad file descriptor 00:22:50.484 [2024-12-09 04:13:18.939193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:22:50.484 [2024-12-09 04:13:18.939207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:22:50.484 [2024-12-09 04:13:18.939220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:22:50.484 [2024-12-09 04:13:18.939233] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:22:50.484 [2024-12-09 04:13:18.939241] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:22:50.484 [2024-12-09 04:13:18.939249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:22:50.484 [2024-12-09 04:13:18.940430] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:50.484 [2024-12-09 04:13:18.940464] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 
-- # expected_count=0 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.415 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 04:13:19 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:51.673 04:13:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.673 
04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:51.673 04:13:20 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.673 04:13:20 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.042 [2024-12-09 04:13:21.225430] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:53.042 [2024-12-09 04:13:21.225463] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:53.042 [2024-12-09 04:13:21.225487] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:53.042 [2024-12-09 04:13:21.311754] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:53.042 [2024-12-09 04:13:21.377446] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:22:53.042 [2024-12-09 04:13:21.378207] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x16a5550:1 started. 00:22:53.042 [2024-12-09 04:13:21.380376] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:53.042 [2024-12-09 04:13:21.380423] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.042 [2024-12-09 04:13:21.383334] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x16a5550 was disconnected and freed. delete nvme_qpair. 
00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.042 request: 00:22:53.042 { 00:22:53.042 "name": "nvme", 00:22:53.042 "trtype": "tcp", 00:22:53.042 "traddr": "10.0.0.2", 00:22:53.042 "adrfam": "ipv4", 00:22:53.042 "trsvcid": "8009", 00:22:53.042 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:53.042 "wait_for_attach": true, 00:22:53.042 "method": "bdev_nvme_start_discovery", 00:22:53.042 "req_id": 1 00:22:53.042 } 00:22:53.042 Got JSON-RPC error response 00:22:53.042 response: 00:22:53.042 { 00:22:53.042 "code": -17, 00:22:53.042 "message": "File exists" 00:22:53.042 } 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.042 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.042 request: 00:22:53.042 { 00:22:53.042 "name": "nvme_second", 00:22:53.042 "trtype": "tcp", 00:22:53.042 "traddr": "10.0.0.2", 00:22:53.042 "adrfam": "ipv4", 00:22:53.042 "trsvcid": "8009", 00:22:53.042 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:53.042 "wait_for_attach": true, 00:22:53.042 "method": "bdev_nvme_start_discovery", 00:22:53.042 "req_id": 1 00:22:53.042 } 00:22:53.042 Got JSON-RPC error response 00:22:53.042 response: 00:22:53.042 { 00:22:53.042 "code": -17, 00:22:53.042 "message": "File exists" 00:22:53.043 } 
00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:53.043 04:13:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.416 [2024-12-09 04:13:22.592404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.416 [2024-12-09 04:13:22.592451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a510 with addr=10.0.0.2, port=8010 00:22:54.416 [2024-12-09 04:13:22.592503] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:54.416 [2024-12-09 04:13:22.592527] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:54.416 [2024-12-09 04:13:22.592539] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:55.349 [2024-12-09 04:13:23.594853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.349 [2024-12-09 04:13:23.594914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x155a510 with addr=10.0.0.2, port=8010 00:22:55.349 [2024-12-09 04:13:23.594944] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:55.349 [2024-12-09 04:13:23.594960] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:55.349 [2024-12-09 04:13:23.594988] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:56.282 [2024-12-09 04:13:24.596980] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:56.282 request: 00:22:56.282 { 00:22:56.282 "name": "nvme_second", 00:22:56.282 "trtype": "tcp", 00:22:56.282 "traddr": "10.0.0.2", 00:22:56.282 "adrfam": "ipv4", 00:22:56.282 "trsvcid": "8010", 00:22:56.282 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:56.282 "wait_for_attach": false, 00:22:56.282 "attach_timeout_ms": 3000, 00:22:56.282 "method": "bdev_nvme_start_discovery", 00:22:56.282 "req_id": 1 
00:22:56.282 } 00:22:56.282 Got JSON-RPC error response 00:22:56.282 response: 00:22:56.282 { 00:22:56.282 "code": -110, 00:22:56.282 "message": "Connection timed out" 00:22:56.282 } 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 307491 00:22:56.282 04:13:24 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.282 rmmod nvme_tcp 00:22:56.282 rmmod nvme_fabrics 00:22:56.282 rmmod nvme_keyring 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 307466 ']' 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 307466 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 307466 ']' 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 307466 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 307466 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 307466' 00:22:56.282 killing process with pid 307466 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 307466 00:22:56.282 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 307466 00:22:56.540 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:56.541 04:13:24 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr 
flush cvl_0_1 00:22:59.071 00:22:59.071 real 0m14.180s 00:22:59.071 user 0m20.863s 00:22:59.071 sys 0m2.864s 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:59.071 ************************************ 00:22:59.071 END TEST nvmf_host_discovery 00:22:59.071 ************************************ 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.071 ************************************ 00:22:59.071 START TEST nvmf_host_multipath_status 00:22:59.071 ************************************ 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:59.071 * Looking for test storage... 
00:22:59.071 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:59.071 04:13:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:59.071 04:13:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:59.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.071 --rc genhtml_branch_coverage=1 00:22:59.071 --rc genhtml_function_coverage=1 00:22:59.071 --rc genhtml_legend=1 00:22:59.071 --rc geninfo_all_blocks=1 00:22:59.071 --rc geninfo_unexecuted_blocks=1 00:22:59.071 00:22:59.071 ' 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:59.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.071 --rc genhtml_branch_coverage=1 00:22:59.071 --rc genhtml_function_coverage=1 00:22:59.071 --rc genhtml_legend=1 00:22:59.071 --rc geninfo_all_blocks=1 00:22:59.071 --rc geninfo_unexecuted_blocks=1 00:22:59.071 00:22:59.071 ' 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:59.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.071 --rc genhtml_branch_coverage=1 00:22:59.071 --rc genhtml_function_coverage=1 00:22:59.071 --rc genhtml_legend=1 00:22:59.071 --rc geninfo_all_blocks=1 00:22:59.071 --rc geninfo_unexecuted_blocks=1 00:22:59.071 00:22:59.071 ' 00:22:59.071 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:59.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:59.071 --rc genhtml_branch_coverage=1 00:22:59.071 --rc genhtml_function_coverage=1 00:22:59.071 --rc genhtml_legend=1 00:22:59.071 --rc geninfo_all_blocks=1 00:22:59.071 --rc geninfo_unexecuted_blocks=1 00:22:59.071 00:22:59.071 ' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:59.072 
04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:59.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.072 04:13:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.072 04:13:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:00.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:00.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:00.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:00.974 04:13:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:00.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:00.974 04:13:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:00.974 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:00.975 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:00.975 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.283 ms 00:23:00.975 00:23:00.975 --- 10.0.0.2 ping statistics --- 00:23:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.975 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:00.975 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:00.975 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:23:00.975 00:23:00.975 --- 10.0.0.1 ping statistics --- 00:23:00.975 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:00.975 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=310669 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 310669 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 310669 ']' 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.975 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:01.233 [2024-12-09 04:13:29.558198] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:23:01.233 [2024-12-09 04:13:29.558308] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.233 [2024-12-09 04:13:29.630624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:01.233 [2024-12-09 04:13:29.689106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.233 [2024-12-09 04:13:29.689167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:01.233 [2024-12-09 04:13:29.689189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.233 [2024-12-09 04:13:29.689200] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.233 [2024-12-09 04:13:29.689209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:01.233 [2024-12-09 04:13:29.690650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.233 [2024-12-09 04:13:29.690656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.233 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.233 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:01.233 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.233 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.233 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:01.492 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.492 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=310669 00:23:01.492 04:13:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:01.750 [2024-12-09 04:13:30.124118] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.750 04:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:02.008 Malloc0 00:23:02.008 04:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:02.266 04:13:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:02.523 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.780 [2024-12-09 04:13:31.312900] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.781 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:03.037 [2024-12-09 04:13:31.581640] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=310953 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 310953 /var/tmp/bdevperf.sock 00:23:03.037 04:13:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 310953 ']' 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.037 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:03.601 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.601 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:03.602 04:13:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:03.602 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:04.165 Nvme0n1 00:23:04.165 04:13:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:04.729 Nvme0n1 00:23:04.729 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:04.729 04:13:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:06.627 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:06.627 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:06.884 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:07.447 04:13:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:08.379 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:08.379 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:08.379 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.379 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:08.636 04:13:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.636 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:08.636 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.636 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:08.894 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:08.894 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:08.894 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.894 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:09.152 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.152 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:09.152 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.152 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:09.409 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.409 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:09.410 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.410 04:13:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:09.667 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.667 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:09.667 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:09.667 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:09.925 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:09.925 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:09.925 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:10.183 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:10.440 04:13:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:11.371 04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:11.371 04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:11.371 04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.371 04:13:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.935 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:12.500 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.500 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:12.500 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.500 04:13:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:12.758 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:12.758 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:12.758 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.758 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:13.014 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.014 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:13.014 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.014 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:13.272 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.272 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:13.272 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:13.529 04:13:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:13.787 04:13:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:14.720 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:14.720 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:14.720 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.720 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:14.978 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.978 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:14.978 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.978 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:15.235 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:15.235 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:15.235 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.235 04:13:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:15.492 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.492 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:15.492 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.492 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:15.750 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:15.750 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:15.750 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:15.750 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:16.007 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.007 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:16.007 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.007 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:16.264 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.264 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:16.264 04:13:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:16.522 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:17.084 04:13:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:18.029 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:18.029 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:18.030 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.030 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:18.287 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.287 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:18.287 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.287 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:18.544 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:18.544 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:18.544 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.544 04:13:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:18.802 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:18.803 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:18.803 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:18.803 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:19.061 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.061 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:19.061 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.061 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:19.319 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.319 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:19.319 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.319 04:13:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:19.577 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.577 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:19.577 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:19.835 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:20.093 04:13:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:21.026 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:21.026 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:21.026 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.026 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:21.592 04:13:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.592 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:21.592 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.592 04:13:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:21.592 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:21.592 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:21.592 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:21.592 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:22.159 
04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.159 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:22.417 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.417 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:22.417 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.417 04:13:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:22.983 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.983 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:22.983 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:22.983 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:23.244 04:13:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:24.613 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:24.613 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:24.613 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.613 04:13:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:24.613 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:24.613 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:24.613 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.613 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:24.870 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:24.870 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:24.870 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:24.870 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.128 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.128 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.128 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.128 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.386 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.386 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:25.386 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.386 04:13:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.643 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.643 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:25.643 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.643 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:26.208 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.208 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:26.208 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:26.208 04:13:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:26.465 04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:27.028 04:13:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:27.959 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:27.959 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:27.959 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:27.959 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:28.216 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.217 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:28.217 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.217 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:28.474 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.474 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.474 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.474 04:13:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:28.732 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.732 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:28.732 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:28.732 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.990 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.990 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:28.990 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.990 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:29.248 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.248 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:29.248 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.248 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:29.506 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.506 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:29.506 04:13:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:29.764 04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:30.021 04:13:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.393 04:13:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:31.651 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.651 04:14:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:31.651 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.651 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:31.909 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.909 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:31.909 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.909 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:32.167 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.167 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:32.167 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.167 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:32.425 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.425 
04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:32.425 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.425 04:14:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:32.682 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.682 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:32.682 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:32.939 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:33.196 04:14:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:34.585 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:34.585 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:34.585 04:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.585 04:14:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:34.585 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.585 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:34.585 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.585 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:34.843 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.843 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:34.843 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.843 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.100 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.100 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.100 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.101 04:14:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.358 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.358 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.358 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.358 04:14:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:35.615 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.616 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:35.616 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.616 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:35.933 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.933 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:35.933 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:36.191 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:36.448 04:14:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:37.398 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:37.398 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:37.398 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.398 04:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:37.963 04:14:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.963 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:38.528 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.528 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:38.528 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.529 04:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:38.529 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.529 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:38.529 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.529 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:38.787 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:38.787 
04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:38.787 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.787 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 310953 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 310953 ']' 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 310953 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.046 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310953 00:23:39.308 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:39.308 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:39.308 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 310953' 00:23:39.308 killing process with pid 310953 00:23:39.308 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 310953 00:23:39.308 04:14:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 310953 00:23:39.308 { 00:23:39.308 "results": [ 00:23:39.308 { 00:23:39.308 "job": "Nvme0n1", 00:23:39.308 "core_mask": "0x4", 00:23:39.308 "workload": "verify", 00:23:39.308 "status": "terminated", 00:23:39.308 "verify_range": { 00:23:39.308 "start": 0, 00:23:39.308 "length": 16384 00:23:39.308 }, 00:23:39.308 "queue_depth": 128, 00:23:39.308 "io_size": 4096, 00:23:39.308 "runtime": 34.383602, 00:23:39.308 "iops": 7962.807387079457, 00:23:39.308 "mibps": 31.10471635577913, 00:23:39.308 "io_failed": 0, 00:23:39.308 "io_timeout": 0, 00:23:39.308 "avg_latency_us": 16047.606952438542, 00:23:39.308 "min_latency_us": 588.6103703703703, 00:23:39.308 "max_latency_us": 4026531.84 00:23:39.308 } 00:23:39.308 ], 00:23:39.308 "core_count": 1 00:23:39.308 } 00:23:39.308 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 310953 00:23:39.308 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:39.308 [2024-12-09 04:13:31.648032] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:23:39.308 [2024-12-09 04:13:31.648128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310953 ] 00:23:39.308 [2024-12-09 04:13:31.715078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.308 [2024-12-09 04:13:31.772385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.308 Running I/O for 90 seconds... 
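The check_status sequence traced above repeatedly calls the `bdev_nvme_get_io_paths` RPC and filters the reply with jq (`.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD`), then compares the extracted flag against the expected value. A minimal Python sketch of that selection logic, using an illustrative response shape and made-up flag values rather than output captured from this run:

```python
# Illustrative structure shaped like a bdev_nvme_get_io_paths reply;
# the flag values below are examples, not data from this log.
sample = {
    "poll_groups": [
        {"io_paths": [
            {"transport": {"trsvcid": "4420"}, "current": False,
             "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"}, "current": True,
             "connected": True, "accessible": True},
        ]}
    ]
}

def port_status(reply, port, field):
    """Mirror the jq filter used in the trace:
    .poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD"""
    for group in reply["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None  # no path listening on that port

print(port_status(sample, "4420", "current"))     # False
print(port_status(sample, "4421", "accessible"))  # True
```

In the shell script the same comparison shows up as the traced `[[ false == \f\a\l\s\e ]]` / `[[ true == \t\r\u\e ]]` checks: one RPC call and one jq filter per port/field pair.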
00:23:39.308 8402.00 IOPS, 32.82 MiB/s [2024-12-09T03:14:07.884Z] 8476.00 IOPS, 33.11 MiB/s [2024-12-09T03:14:07.884Z] 8471.33 IOPS, 33.09 MiB/s [2024-12-09T03:14:07.884Z] 8452.00 IOPS, 33.02 MiB/s [2024-12-09T03:14:07.884Z] 8482.60 IOPS, 33.14 MiB/s [2024-12-09T03:14:07.884Z] 8486.00 IOPS, 33.15 MiB/s [2024-12-09T03:14:07.884Z] 8476.00 IOPS, 33.11 MiB/s [2024-12-09T03:14:07.884Z] 8470.25 IOPS, 33.09 MiB/s [2024-12-09T03:14:07.884Z] 8487.89 IOPS, 33.16 MiB/s [2024-12-09T03:14:07.884Z] 8489.90 IOPS, 33.16 MiB/s [2024-12-09T03:14:07.884Z] 8481.18 IOPS, 33.13 MiB/s [2024-12-09T03:14:07.884Z] 8464.08 IOPS, 33.06 MiB/s [2024-12-09T03:14:07.884Z] 8463.54 IOPS, 33.06 MiB/s [2024-12-09T03:14:07.884Z] 8457.57 IOPS, 33.04 MiB/s [2024-12-09T03:14:07.884Z] 8450.73 IOPS, 33.01 MiB/s [2024-12-09T03:14:07.884Z] [2024-12-09 04:13:48.271343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:99000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:99008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:99016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:99024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:99032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:99040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:99056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:99072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.271982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:99096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.308 [2024-12-09 04:13:48.271998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:39.308 [2024-12-09 04:13:48.272020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.309 [2024-12-09 04:13:48.272050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:39.309 [2024-12-09 04:13:48.272072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:99112 len:8 SGL DATA 
00:23:39.309 [2024-12-09 04:13:48.272102 .. 04:13:48.277559] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: [~114 repeated command/completion pairs elided] WRITE sqid:1 nsid:1 lba:99120-99648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ sqid:1 nsid:1 lba:98632-98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 commands on qid:1, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:004b-003c (wrapping at 007f) p:0 m:0 dnr:0
00:23:39.311 7938.56 IOPS, 31.01 MiB/s [2024-12-09T03:14:07.887Z] 7471.59 IOPS, 29.19 MiB/s [2024-12-09T03:14:07.887Z] 7056.50 IOPS, 27.56 MiB/s [2024-12-09T03:14:07.887Z] 6685.11 IOPS, 26.11 MiB/s [2024-12-09T03:14:07.887Z] 6764.75 IOPS, 26.42 MiB/s [2024-12-09T03:14:07.887Z] 6842.43 IOPS, 26.73 MiB/s [2024-12-09T03:14:07.887Z] 6943.73 IOPS, 27.12 MiB/s [2024-12-09T03:14:07.887Z] 7128.57 IOPS, 27.85 MiB/s [2024-12-09T03:14:07.887Z] 7292.46 IOPS, 28.49 MiB/s [2024-12-09T03:14:07.887Z] 7438.92 IOPS, 29.06 MiB/s [2024-12-09T03:14:07.887Z] 7484.54 IOPS, 29.24 MiB/s [2024-12-09T03:14:07.888Z] 7522.67 IOPS, 29.39 MiB/s [2024-12-09T03:14:07.888Z] 7557.79 IOPS, 29.52 MiB/s [2024-12-09T03:14:07.888Z] 7630.93 IOPS, 29.81 MiB/s [2024-12-09T03:14:07.888Z] 7753.23 IOPS, 30.29 MiB/s
[2024-12-09T03:14:07.888Z] 7857.32 IOPS, 30.69 MiB/s [2024-12-09T03:14:07.888Z] [2024-12-09 04:14:04.949405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:42048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:42064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:42080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:42096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:42112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 
lba:42128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:42144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:42160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:42176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:42192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:42208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.949975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 
cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.949996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:42224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:42240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:42256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:42288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:42304 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:42336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.950575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:41448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:23:39.312 [2024-12-09 04:14:04.950677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:41512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:41544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:41576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:41600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:41632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:39.312 [2024-12-09 04:14:04.950911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:41664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.950972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:41696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.312 [2024-12-09 04:14:04.950988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:39.312 [2024-12-09 04:14:04.951010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:42360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.312 [2024-12-09 04:14:04.951026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:41728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:41760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:39.313 
[2024-12-09 04:14:04.951124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:41792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:41856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:41888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:41920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:41952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 
04:14:04.951373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:42016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:41608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:41672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 
04:14:04.951596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:41704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.951653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:41736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:41768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:41800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951813] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:41864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.951960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:41960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.951976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:42400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952068] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:41992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.952085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:39.313 [2024-12-09 04:14:04.952124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:42408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:42424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:42440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:42456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952300] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:42472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:42488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:39.313 [2024-12-09 04:14:04.952406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:42504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:39.313 [2024-12-09 04:14:04.952423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:39.313 7933.78 IOPS, 30.99 MiB/s [2024-12-09T03:14:07.889Z] 7948.09 IOPS, 31.05 MiB/s [2024-12-09T03:14:07.889Z] 7961.38 IOPS, 31.10 MiB/s [2024-12-09T03:14:07.889Z] Received shutdown signal, test time was about 34.384464 seconds 00:23:39.313 00:23:39.313 Latency(us) 00:23:39.313 [2024-12-09T03:14:07.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.313 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:39.313 Verification LBA range: start 0x0 length 0x4000 00:23:39.313 Nvme0n1 : 34.38 7962.81 31.10 0.00 0.00 16047.61 588.61 4026531.84 00:23:39.313 [2024-12-09T03:14:07.889Z] =================================================================================================================== 00:23:39.313 
[2024-12-09T03:14:07.889Z] Total : 7962.81 31.10 0.00 0.00 16047.61 588.61 4026531.84 00:23:39.313 04:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:39.880 rmmod nvme_tcp 00:23:39.880 rmmod nvme_fabrics 00:23:39.880 rmmod nvme_keyring 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 310669 ']' 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@518 -- # killprocess 310669 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 310669 ']' 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 310669 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 310669 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 310669' 00:23:39.880 killing process with pid 310669 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 310669 00:23:39.880 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 310669 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.140 04:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:42.046 00:23:42.046 real 0m43.463s 00:23:42.046 user 2m12.626s 00:23:42.046 sys 0m10.599s 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:42.046 ************************************ 00:23:42.046 END TEST nvmf_host_multipath_status 00:23:42.046 ************************************ 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.046 ************************************ 00:23:42.046 START TEST nvmf_discovery_remove_ifc 00:23:42.046 
************************************ 00:23:42.046 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:42.306 * Looking for test storage... 00:23:42.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:42.306 04:14:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 
00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.306 --rc genhtml_branch_coverage=1 00:23:42.306 --rc genhtml_function_coverage=1 00:23:42.306 --rc genhtml_legend=1 00:23:42.306 --rc geninfo_all_blocks=1 00:23:42.306 --rc geninfo_unexecuted_blocks=1 00:23:42.306 00:23:42.306 ' 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.306 --rc genhtml_branch_coverage=1 00:23:42.306 --rc genhtml_function_coverage=1 00:23:42.306 --rc genhtml_legend=1 00:23:42.306 --rc geninfo_all_blocks=1 00:23:42.306 --rc geninfo_unexecuted_blocks=1 00:23:42.306 00:23:42.306 ' 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:42.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.306 --rc genhtml_branch_coverage=1 00:23:42.306 --rc genhtml_function_coverage=1 00:23:42.306 --rc genhtml_legend=1 00:23:42.306 --rc geninfo_all_blocks=1 00:23:42.306 --rc geninfo_unexecuted_blocks=1 00:23:42.306 00:23:42.306 ' 00:23:42.306 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:42.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:42.307 --rc genhtml_branch_coverage=1 00:23:42.307 --rc genhtml_function_coverage=1 00:23:42.307 --rc genhtml_legend=1 00:23:42.307 --rc geninfo_all_blocks=1 00:23:42.307 --rc geninfo_unexecuted_blocks=1 00:23:42.307 00:23:42.307 ' 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 
-- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.307 04:14:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:42.307 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:42.307 
04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:23:42.307 04:14:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:44.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:44.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:44.839 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:44.840 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:44.840 04:14:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:44.840 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:44.840 04:14:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:44.840 04:14:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:44.840 04:14:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:44.840 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:44.840 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:23:44.840 00:23:44.840 --- 10.0.0.2 ping statistics --- 00:23:44.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.840 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:44.840 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:44.840 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:23:44.840 00:23:44.840 --- 10.0.0.1 ping statistics --- 00:23:44.840 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:44.840 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=317420 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 317420 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 317420 ']' 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.840 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.840 [2024-12-09 04:14:13.261179] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:23:44.840 [2024-12-09 04:14:13.261247] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.840 [2024-12-09 04:14:13.331914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.840 [2024-12-09 04:14:13.387901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.840 [2024-12-09 04:14:13.387955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:44.840 [2024-12-09 04:14:13.387977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.840 [2024-12-09 04:14:13.387988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.840 [2024-12-09 04:14:13.387998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:44.840 [2024-12-09 04:14:13.388634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.099 [2024-12-09 04:14:13.533903] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.099 [2024-12-09 04:14:13.542083] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:45.099 null0 00:23:45.099 [2024-12-09 04:14:13.574031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=317449 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 317449 /tmp/host.sock 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 317449 ']' 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:45.099 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.099 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.099 [2024-12-09 04:14:13.642165] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:23:45.099 [2024-12-09 04:14:13.642257] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317449 ] 00:23:45.357 [2024-12-09 04:14:13.714312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.357 [2024-12-09 04:14:13.771314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.357 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.636 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.636 04:14:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:45.636 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.637 04:14:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.569 [2024-12-09 04:14:15.041388] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:46.569 [2024-12-09 04:14:15.041421] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:46.569 [2024-12-09 04:14:15.041445] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:46.569 [2024-12-09 04:14:15.127757] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:46.826 [2024-12-09 04:14:15.342949] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:46.826 [2024-12-09 04:14:15.344097] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xb4f650:1 started. 
00:23:46.826 [2024-12-09 04:14:15.345817] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:46.826 [2024-12-09 04:14:15.345877] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:46.826 [2024-12-09 04:14:15.345914] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:46.826 [2024-12-09 04:14:15.345938] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:46.826 [2024-12-09 04:14:15.345978] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.826 [2024-12-09 04:14:15.350375] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xb4f650 was disconnected and freed. delete nvme_qpair. 
00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:46.826 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:47.083 04:14:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:48.011 04:14:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:49.386 04:14:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:50.468 04:14:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:51.157 04:14:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:52.176 04:14:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.435 [2024-12-09 04:14:20.787192] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:52.435 [2024-12-09 04:14:20.787288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.435 [2024-12-09 04:14:20.787320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.435 [2024-12-09 04:14:20.787338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.435 [2024-12-09 04:14:20.787351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.435 [2024-12-09 04:14:20.787364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.435 [2024-12-09 04:14:20.787378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.435 [2024-12-09 04:14:20.787392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:52.435 [2024-12-09 04:14:20.787405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.435 [2024-12-09 04:14:20.787419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:52.435 [2024-12-09 04:14:20.787432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:52.435 [2024-12-09 04:14:20.787456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2be90 is same with the state(6) to be set 00:23:52.435 [2024-12-09 04:14:20.797212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2be90 (9): Bad file descriptor 00:23:52.435 [2024-12-09 04:14:20.807267] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:52.435 [2024-12-09 04:14:20.807297] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:23:52.435 [2024-12-09 04:14:20.807311] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:52.435 [2024-12-09 04:14:20.807326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:52.435 [2024-12-09 04:14:20.807376] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:53.367 [2024-12-09 04:14:21.817299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:53.367 [2024-12-09 04:14:21.817344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb2be90 with addr=10.0.0.2, port=4420 00:23:53.367 [2024-12-09 04:14:21.817363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb2be90 is same with the state(6) to be set 00:23:53.367 [2024-12-09 04:14:21.817391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2be90 (9): Bad file descriptor 00:23:53.367 [2024-12-09 04:14:21.817818] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:23:53.367 [2024-12-09 04:14:21.817855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:53.367 [2024-12-09 04:14:21.817871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:53.367 [2024-12-09 04:14:21.817887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:53.367 [2024-12-09 04:14:21.817901] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:53.367 [2024-12-09 04:14:21.817911] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:53.367 [2024-12-09 04:14:21.817919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:53.367 [2024-12-09 04:14:21.817932] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:53.367 [2024-12-09 04:14:21.817941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:53.367 04:14:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:54.301 [2024-12-09 04:14:22.820438] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:54.301 [2024-12-09 04:14:22.820503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:23:54.301 [2024-12-09 04:14:22.820538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:54.301 [2024-12-09 04:14:22.820567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:54.301 [2024-12-09 04:14:22.820581] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:54.301 [2024-12-09 04:14:22.820595] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:54.301 [2024-12-09 04:14:22.820632] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:54.301 [2024-12-09 04:14:22.820640] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:54.301 [2024-12-09 04:14:22.820702] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:54.301 [2024-12-09 04:14:22.820781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.301 [2024-12-09 04:14:22.820804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.301 [2024-12-09 04:14:22.820823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.301 [2024-12-09 04:14:22.820837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.301 [2024-12-09 04:14:22.820853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:54.301 [2024-12-09 04:14:22.820867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.301 [2024-12-09 04:14:22.820881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.301 [2024-12-09 04:14:22.820894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.301 [2024-12-09 04:14:22.820908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:54.301 [2024-12-09 04:14:22.820920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:54.301 [2024-12-09 04:14:22.820933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:54.301 [2024-12-09 04:14:22.820990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1b5e0 (9): Bad file descriptor 00:23:54.301 [2024-12-09 04:14:22.821975] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:54.301 [2024-12-09 04:14:22.821996] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.301 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:54.560 04:14:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:55.494 04:14:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.494 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:55.494 04:14:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.429 [2024-12-09 04:14:24.877914] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:56.429 [2024-12-09 04:14:24.877949] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:56.429 [2024-12-09 04:14:24.877973] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.685 [2024-12-09 04:14:25.007368] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:56.685 04:14:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:56.685 [2024-12-09 04:14:25.229585] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:23:56.686 [2024-12-09 04:14:25.230429] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xb58f60:1 started. 
00:23:56.686 [2024-12-09 04:14:25.231791] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:56.686 [2024-12-09 04:14:25.231834] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:56.686 [2024-12-09 04:14:25.231864] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:56.686 [2024-12-09 04:14:25.231887] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:56.686 [2024-12-09 04:14:25.231901] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:56.686 [2024-12-09 04:14:25.236483] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xb58f60 was disconnected and freed. delete nvme_qpair. 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:57.616 04:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 317449 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 317449 ']' 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 317449 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317449 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317449' 00:23:57.616 killing process with pid 317449 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 317449 00:23:57.616 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 317449 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.873 04:14:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.873 rmmod nvme_tcp 00:23:57.873 rmmod nvme_fabrics 00:23:57.873 rmmod nvme_keyring 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 317420 ']' 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 317420 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 317420 ']' 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 317420 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317420 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317420' 00:23:57.873 killing process 
with pid 317420 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 317420 00:23:57.873 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 317420 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.130 04:14:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:00.664 00:24:00.664 real 0m18.141s 00:24:00.664 user 0m26.092s 00:24:00.664 sys 0m3.108s 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.664 ************************************ 00:24:00.664 END TEST nvmf_discovery_remove_ifc 00:24:00.664 ************************************ 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.664 ************************************ 00:24:00.664 START TEST nvmf_identify_kernel_target 00:24:00.664 ************************************ 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:00.664 * Looking for test storage... 
00:24:00.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:00.664 04:14:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:00.664 04:14:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.664 --rc genhtml_branch_coverage=1 00:24:00.664 --rc genhtml_function_coverage=1 00:24:00.664 --rc genhtml_legend=1 00:24:00.664 --rc geninfo_all_blocks=1 00:24:00.664 --rc geninfo_unexecuted_blocks=1 00:24:00.664 00:24:00.664 ' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.664 --rc genhtml_branch_coverage=1 00:24:00.664 --rc genhtml_function_coverage=1 00:24:00.664 --rc genhtml_legend=1 00:24:00.664 --rc geninfo_all_blocks=1 00:24:00.664 --rc geninfo_unexecuted_blocks=1 00:24:00.664 00:24:00.664 ' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.664 --rc genhtml_branch_coverage=1 00:24:00.664 --rc genhtml_function_coverage=1 00:24:00.664 --rc genhtml_legend=1 00:24:00.664 --rc geninfo_all_blocks=1 00:24:00.664 --rc geninfo_unexecuted_blocks=1 00:24:00.664 00:24:00.664 ' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:00.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:00.664 --rc genhtml_branch_coverage=1 00:24:00.664 --rc genhtml_function_coverage=1 00:24:00.664 --rc genhtml_legend=1 00:24:00.664 --rc geninfo_all_blocks=1 00:24:00.664 --rc geninfo_unexecuted_blocks=1 00:24:00.664 00:24:00.664 ' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.664 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:00.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:24:00.665 04:14:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:02.568 04:14:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:02.568 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.568 04:14:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:02.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.568 04:14:31 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:02.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:02.568 Found net devices under 0000:0a:00.1: cvl_0_1 
00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:02.568 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:02.569 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:02.828 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:02.828 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:02.828 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:02.828 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:02.828 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:02.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:02.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:24:02.829 00:24:02.829 --- 10.0.0.2 ping statistics --- 00:24:02.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.829 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:02.829 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:02.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:24:02.829 00:24:02.829 --- 10.0.0.1 ping statistics --- 00:24:02.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:02.829 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:02.829 
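The `nvmf_tcp_init` sequence logged above isolates one NIC port in a network namespace so target and initiator traffic crosses a real link. A minimal standalone sketch of that flow, under assumptions: the interface names `cvl_0_0`/`cvl_0_1` are taken from this log and are machine-specific (on a box without such NICs a veth pair would stand in), and everything must run as root:

```shell
# Sketch of the nvmf_tcp_init flow seen in the log.
# Assumptions: interfaces cvl_0_0/cvl_0_1 exist on this host; run as root.
NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# Move the target-side port into its own namespace.
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

# Initiator keeps 10.0.0.1 in the root namespace; the target gets
# 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP listener port, then verify reachability both ways,
# exactly as the log does before proceeding.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the target end inside a namespace is what later lets the same suite prefix target-side commands with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array in the log).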
04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:02.829 04:14:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:04.203 Waiting for block devices as requested 00:24:04.203 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:04.203 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:04.204 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:04.461 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:04.461 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:04.461 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:04.461 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:04.720 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:04.720 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:04.720 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:04.979 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:04.979 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:04.979 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:04.979 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:05.237 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:24:05.237 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:05.237 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:05.496 No valid GPT data, bailing 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:05.496 04:14:33 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:05.496 00:24:05.496 Discovery Log Number of Records 2, Generation counter 2 00:24:05.496 =====Discovery Log Entry 0====== 00:24:05.496 trtype: tcp 00:24:05.496 adrfam: ipv4 00:24:05.496 subtype: current discovery subsystem 
00:24:05.496 treq: not specified, sq flow control disable supported 00:24:05.496 portid: 1 00:24:05.496 trsvcid: 4420 00:24:05.496 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:05.496 traddr: 10.0.0.1 00:24:05.496 eflags: none 00:24:05.496 sectype: none 00:24:05.496 =====Discovery Log Entry 1====== 00:24:05.496 trtype: tcp 00:24:05.496 adrfam: ipv4 00:24:05.496 subtype: nvme subsystem 00:24:05.496 treq: not specified, sq flow control disable supported 00:24:05.496 portid: 1 00:24:05.496 trsvcid: 4420 00:24:05.496 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:05.496 traddr: 10.0.0.1 00:24:05.496 eflags: none 00:24:05.496 sectype: none 00:24:05.496 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:05.496 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:05.757 ===================================================== 00:24:05.757 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:05.757 ===================================================== 00:24:05.757 Controller Capabilities/Features 00:24:05.757 ================================ 00:24:05.757 Vendor ID: 0000 00:24:05.757 Subsystem Vendor ID: 0000 00:24:05.757 Serial Number: bc9bbce0182711bd5c2b 00:24:05.757 Model Number: Linux 00:24:05.757 Firmware Version: 6.8.9-20 00:24:05.757 Recommended Arb Burst: 0 00:24:05.757 IEEE OUI Identifier: 00 00 00 00:24:05.757 Multi-path I/O 00:24:05.757 May have multiple subsystem ports: No 00:24:05.757 May have multiple controllers: No 00:24:05.757 Associated with SR-IOV VF: No 00:24:05.757 Max Data Transfer Size: Unlimited 00:24:05.757 Max Number of Namespaces: 0 00:24:05.757 Max Number of I/O Queues: 1024 00:24:05.757 NVMe Specification Version (VS): 1.3 00:24:05.757 NVMe Specification Version (Identify): 1.3 00:24:05.757 Maximum Queue Entries: 1024 
00:24:05.757 Contiguous Queues Required: No 00:24:05.757 Arbitration Mechanisms Supported 00:24:05.757 Weighted Round Robin: Not Supported 00:24:05.757 Vendor Specific: Not Supported 00:24:05.757 Reset Timeout: 7500 ms 00:24:05.757 Doorbell Stride: 4 bytes 00:24:05.757 NVM Subsystem Reset: Not Supported 00:24:05.757 Command Sets Supported 00:24:05.757 NVM Command Set: Supported 00:24:05.757 Boot Partition: Not Supported 00:24:05.757 Memory Page Size Minimum: 4096 bytes 00:24:05.757 Memory Page Size Maximum: 4096 bytes 00:24:05.757 Persistent Memory Region: Not Supported 00:24:05.757 Optional Asynchronous Events Supported 00:24:05.757 Namespace Attribute Notices: Not Supported 00:24:05.757 Firmware Activation Notices: Not Supported 00:24:05.757 ANA Change Notices: Not Supported 00:24:05.757 PLE Aggregate Log Change Notices: Not Supported 00:24:05.757 LBA Status Info Alert Notices: Not Supported 00:24:05.757 EGE Aggregate Log Change Notices: Not Supported 00:24:05.757 Normal NVM Subsystem Shutdown event: Not Supported 00:24:05.757 Zone Descriptor Change Notices: Not Supported 00:24:05.757 Discovery Log Change Notices: Supported 00:24:05.757 Controller Attributes 00:24:05.757 128-bit Host Identifier: Not Supported 00:24:05.757 Non-Operational Permissive Mode: Not Supported 00:24:05.757 NVM Sets: Not Supported 00:24:05.757 Read Recovery Levels: Not Supported 00:24:05.757 Endurance Groups: Not Supported 00:24:05.757 Predictable Latency Mode: Not Supported 00:24:05.757 Traffic Based Keep ALive: Not Supported 00:24:05.757 Namespace Granularity: Not Supported 00:24:05.757 SQ Associations: Not Supported 00:24:05.757 UUID List: Not Supported 00:24:05.757 Multi-Domain Subsystem: Not Supported 00:24:05.757 Fixed Capacity Management: Not Supported 00:24:05.757 Variable Capacity Management: Not Supported 00:24:05.757 Delete Endurance Group: Not Supported 00:24:05.757 Delete NVM Set: Not Supported 00:24:05.757 Extended LBA Formats Supported: Not Supported 00:24:05.757 Flexible 
Data Placement Supported: Not Supported 00:24:05.757 00:24:05.757 Controller Memory Buffer Support 00:24:05.757 ================================ 00:24:05.757 Supported: No 00:24:05.757 00:24:05.757 Persistent Memory Region Support 00:24:05.757 ================================ 00:24:05.757 Supported: No 00:24:05.757 00:24:05.757 Admin Command Set Attributes 00:24:05.757 ============================ 00:24:05.757 Security Send/Receive: Not Supported 00:24:05.757 Format NVM: Not Supported 00:24:05.757 Firmware Activate/Download: Not Supported 00:24:05.757 Namespace Management: Not Supported 00:24:05.757 Device Self-Test: Not Supported 00:24:05.757 Directives: Not Supported 00:24:05.757 NVMe-MI: Not Supported 00:24:05.757 Virtualization Management: Not Supported 00:24:05.757 Doorbell Buffer Config: Not Supported 00:24:05.757 Get LBA Status Capability: Not Supported 00:24:05.757 Command & Feature Lockdown Capability: Not Supported 00:24:05.757 Abort Command Limit: 1 00:24:05.757 Async Event Request Limit: 1 00:24:05.757 Number of Firmware Slots: N/A 00:24:05.757 Firmware Slot 1 Read-Only: N/A 00:24:05.757 Firmware Activation Without Reset: N/A 00:24:05.757 Multiple Update Detection Support: N/A 00:24:05.757 Firmware Update Granularity: No Information Provided 00:24:05.757 Per-Namespace SMART Log: No 00:24:05.757 Asymmetric Namespace Access Log Page: Not Supported 00:24:05.757 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:05.757 Command Effects Log Page: Not Supported 00:24:05.757 Get Log Page Extended Data: Supported 00:24:05.757 Telemetry Log Pages: Not Supported 00:24:05.757 Persistent Event Log Pages: Not Supported 00:24:05.757 Supported Log Pages Log Page: May Support 00:24:05.758 Commands Supported & Effects Log Page: Not Supported 00:24:05.758 Feature Identifiers & Effects Log Page:May Support 00:24:05.758 NVMe-MI Commands & Effects Log Page: May Support 00:24:05.758 Data Area 4 for Telemetry Log: Not Supported 00:24:05.758 Error Log Page Entries 
Supported: 1 00:24:05.758 Keep Alive: Not Supported 00:24:05.758 00:24:05.758 NVM Command Set Attributes 00:24:05.758 ========================== 00:24:05.758 Submission Queue Entry Size 00:24:05.758 Max: 1 00:24:05.758 Min: 1 00:24:05.758 Completion Queue Entry Size 00:24:05.758 Max: 1 00:24:05.758 Min: 1 00:24:05.758 Number of Namespaces: 0 00:24:05.758 Compare Command: Not Supported 00:24:05.758 Write Uncorrectable Command: Not Supported 00:24:05.758 Dataset Management Command: Not Supported 00:24:05.758 Write Zeroes Command: Not Supported 00:24:05.758 Set Features Save Field: Not Supported 00:24:05.758 Reservations: Not Supported 00:24:05.758 Timestamp: Not Supported 00:24:05.758 Copy: Not Supported 00:24:05.758 Volatile Write Cache: Not Present 00:24:05.758 Atomic Write Unit (Normal): 1 00:24:05.758 Atomic Write Unit (PFail): 1 00:24:05.758 Atomic Compare & Write Unit: 1 00:24:05.758 Fused Compare & Write: Not Supported 00:24:05.758 Scatter-Gather List 00:24:05.758 SGL Command Set: Supported 00:24:05.758 SGL Keyed: Not Supported 00:24:05.758 SGL Bit Bucket Descriptor: Not Supported 00:24:05.758 SGL Metadata Pointer: Not Supported 00:24:05.758 Oversized SGL: Not Supported 00:24:05.758 SGL Metadata Address: Not Supported 00:24:05.758 SGL Offset: Supported 00:24:05.758 Transport SGL Data Block: Not Supported 00:24:05.758 Replay Protected Memory Block: Not Supported 00:24:05.758 00:24:05.758 Firmware Slot Information 00:24:05.758 ========================= 00:24:05.758 Active slot: 0 00:24:05.758 00:24:05.758 00:24:05.758 Error Log 00:24:05.758 ========= 00:24:05.758 00:24:05.758 Active Namespaces 00:24:05.758 ================= 00:24:05.758 Discovery Log Page 00:24:05.758 ================== 00:24:05.758 Generation Counter: 2 00:24:05.758 Number of Records: 2 00:24:05.758 Record Format: 0 00:24:05.758 00:24:05.758 Discovery Log Entry 0 00:24:05.758 ---------------------- 00:24:05.758 Transport Type: 3 (TCP) 00:24:05.758 Address Family: 1 (IPv4) 00:24:05.758 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:24:05.758 Entry Flags: 00:24:05.758 Duplicate Returned Information: 0 00:24:05.758 Explicit Persistent Connection Support for Discovery: 0 00:24:05.758 Transport Requirements: 00:24:05.758 Secure Channel: Not Specified 00:24:05.758 Port ID: 1 (0x0001) 00:24:05.758 Controller ID: 65535 (0xffff) 00:24:05.758 Admin Max SQ Size: 32 00:24:05.758 Transport Service Identifier: 4420 00:24:05.758 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:05.758 Transport Address: 10.0.0.1 00:24:05.758 Discovery Log Entry 1 00:24:05.758 ---------------------- 00:24:05.758 Transport Type: 3 (TCP) 00:24:05.758 Address Family: 1 (IPv4) 00:24:05.758 Subsystem Type: 2 (NVM Subsystem) 00:24:05.758 Entry Flags: 00:24:05.758 Duplicate Returned Information: 0 00:24:05.758 Explicit Persistent Connection Support for Discovery: 0 00:24:05.758 Transport Requirements: 00:24:05.758 Secure Channel: Not Specified 00:24:05.758 Port ID: 1 (0x0001) 00:24:05.758 Controller ID: 65535 (0xffff) 00:24:05.758 Admin Max SQ Size: 32 00:24:05.758 Transport Service Identifier: 4420 00:24:05.758 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:05.758 Transport Address: 10.0.0.1 00:24:05.758 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:05.758 get_feature(0x01) failed 00:24:05.758 get_feature(0x02) failed 00:24:05.758 get_feature(0x04) failed 00:24:05.758 ===================================================== 00:24:05.758 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:05.758 ===================================================== 00:24:05.758 Controller Capabilities/Features 00:24:05.758 ================================ 00:24:05.758 Vendor ID: 0000 00:24:05.758 Subsystem Vendor ID: 
0000 00:24:05.758 Serial Number: 4d4edf5658b0b299facd 00:24:05.758 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:05.758 Firmware Version: 6.8.9-20 00:24:05.758 Recommended Arb Burst: 6 00:24:05.758 IEEE OUI Identifier: 00 00 00 00:24:05.758 Multi-path I/O 00:24:05.758 May have multiple subsystem ports: Yes 00:24:05.758 May have multiple controllers: Yes 00:24:05.758 Associated with SR-IOV VF: No 00:24:05.758 Max Data Transfer Size: Unlimited 00:24:05.758 Max Number of Namespaces: 1024 00:24:05.758 Max Number of I/O Queues: 128 00:24:05.758 NVMe Specification Version (VS): 1.3 00:24:05.758 NVMe Specification Version (Identify): 1.3 00:24:05.758 Maximum Queue Entries: 1024 00:24:05.758 Contiguous Queues Required: No 00:24:05.758 Arbitration Mechanisms Supported 00:24:05.758 Weighted Round Robin: Not Supported 00:24:05.758 Vendor Specific: Not Supported 00:24:05.758 Reset Timeout: 7500 ms 00:24:05.758 Doorbell Stride: 4 bytes 00:24:05.758 NVM Subsystem Reset: Not Supported 00:24:05.758 Command Sets Supported 00:24:05.758 NVM Command Set: Supported 00:24:05.758 Boot Partition: Not Supported 00:24:05.758 Memory Page Size Minimum: 4096 bytes 00:24:05.758 Memory Page Size Maximum: 4096 bytes 00:24:05.758 Persistent Memory Region: Not Supported 00:24:05.758 Optional Asynchronous Events Supported 00:24:05.758 Namespace Attribute Notices: Supported 00:24:05.758 Firmware Activation Notices: Not Supported 00:24:05.758 ANA Change Notices: Supported 00:24:05.758 PLE Aggregate Log Change Notices: Not Supported 00:24:05.758 LBA Status Info Alert Notices: Not Supported 00:24:05.758 EGE Aggregate Log Change Notices: Not Supported 00:24:05.758 Normal NVM Subsystem Shutdown event: Not Supported 00:24:05.758 Zone Descriptor Change Notices: Not Supported 00:24:05.758 Discovery Log Change Notices: Not Supported 00:24:05.758 Controller Attributes 00:24:05.758 128-bit Host Identifier: Supported 00:24:05.758 Non-Operational Permissive Mode: Not Supported 00:24:05.758 NVM Sets: Not 
Supported 00:24:05.758 Read Recovery Levels: Not Supported 00:24:05.758 Endurance Groups: Not Supported 00:24:05.758 Predictable Latency Mode: Not Supported 00:24:05.758 Traffic Based Keep ALive: Supported 00:24:05.758 Namespace Granularity: Not Supported 00:24:05.758 SQ Associations: Not Supported 00:24:05.758 UUID List: Not Supported 00:24:05.758 Multi-Domain Subsystem: Not Supported 00:24:05.758 Fixed Capacity Management: Not Supported 00:24:05.758 Variable Capacity Management: Not Supported 00:24:05.758 Delete Endurance Group: Not Supported 00:24:05.758 Delete NVM Set: Not Supported 00:24:05.758 Extended LBA Formats Supported: Not Supported 00:24:05.758 Flexible Data Placement Supported: Not Supported 00:24:05.758 00:24:05.758 Controller Memory Buffer Support 00:24:05.758 ================================ 00:24:05.758 Supported: No 00:24:05.758 00:24:05.758 Persistent Memory Region Support 00:24:05.758 ================================ 00:24:05.758 Supported: No 00:24:05.758 00:24:05.758 Admin Command Set Attributes 00:24:05.758 ============================ 00:24:05.758 Security Send/Receive: Not Supported 00:24:05.758 Format NVM: Not Supported 00:24:05.758 Firmware Activate/Download: Not Supported 00:24:05.758 Namespace Management: Not Supported 00:24:05.758 Device Self-Test: Not Supported 00:24:05.758 Directives: Not Supported 00:24:05.758 NVMe-MI: Not Supported 00:24:05.758 Virtualization Management: Not Supported 00:24:05.758 Doorbell Buffer Config: Not Supported 00:24:05.758 Get LBA Status Capability: Not Supported 00:24:05.758 Command & Feature Lockdown Capability: Not Supported 00:24:05.758 Abort Command Limit: 4 00:24:05.758 Async Event Request Limit: 4 00:24:05.758 Number of Firmware Slots: N/A 00:24:05.758 Firmware Slot 1 Read-Only: N/A 00:24:05.758 Firmware Activation Without Reset: N/A 00:24:05.758 Multiple Update Detection Support: N/A 00:24:05.758 Firmware Update Granularity: No Information Provided 00:24:05.758 Per-Namespace SMART Log: Yes 
00:24:05.758 Asymmetric Namespace Access Log Page: Supported 00:24:05.758 ANA Transition Time : 10 sec 00:24:05.758 00:24:05.758 Asymmetric Namespace Access Capabilities 00:24:05.758 ANA Optimized State : Supported 00:24:05.758 ANA Non-Optimized State : Supported 00:24:05.758 ANA Inaccessible State : Supported 00:24:05.758 ANA Persistent Loss State : Supported 00:24:05.758 ANA Change State : Supported 00:24:05.758 ANAGRPID is not changed : No 00:24:05.758 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:05.758 00:24:05.758 ANA Group Identifier Maximum : 128 00:24:05.758 Number of ANA Group Identifiers : 128 00:24:05.759 Max Number of Allowed Namespaces : 1024 00:24:05.759 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:05.759 Command Effects Log Page: Supported 00:24:05.759 Get Log Page Extended Data: Supported 00:24:05.759 Telemetry Log Pages: Not Supported 00:24:05.759 Persistent Event Log Pages: Not Supported 00:24:05.759 Supported Log Pages Log Page: May Support 00:24:05.759 Commands Supported & Effects Log Page: Not Supported 00:24:05.759 Feature Identifiers & Effects Log Page:May Support 00:24:05.759 NVMe-MI Commands & Effects Log Page: May Support 00:24:05.759 Data Area 4 for Telemetry Log: Not Supported 00:24:05.759 Error Log Page Entries Supported: 128 00:24:05.759 Keep Alive: Supported 00:24:05.759 Keep Alive Granularity: 1000 ms 00:24:05.759 00:24:05.759 NVM Command Set Attributes 00:24:05.759 ========================== 00:24:05.759 Submission Queue Entry Size 00:24:05.759 Max: 64 00:24:05.759 Min: 64 00:24:05.759 Completion Queue Entry Size 00:24:05.759 Max: 16 00:24:05.759 Min: 16 00:24:05.759 Number of Namespaces: 1024 00:24:05.759 Compare Command: Not Supported 00:24:05.759 Write Uncorrectable Command: Not Supported 00:24:05.759 Dataset Management Command: Supported 00:24:05.759 Write Zeroes Command: Supported 00:24:05.759 Set Features Save Field: Not Supported 00:24:05.759 Reservations: Not Supported 00:24:05.759 Timestamp: Not Supported 
00:24:05.759 Copy: Not Supported 00:24:05.759 Volatile Write Cache: Present 00:24:05.759 Atomic Write Unit (Normal): 1 00:24:05.759 Atomic Write Unit (PFail): 1 00:24:05.759 Atomic Compare & Write Unit: 1 00:24:05.759 Fused Compare & Write: Not Supported 00:24:05.759 Scatter-Gather List 00:24:05.759 SGL Command Set: Supported 00:24:05.759 SGL Keyed: Not Supported 00:24:05.759 SGL Bit Bucket Descriptor: Not Supported 00:24:05.759 SGL Metadata Pointer: Not Supported 00:24:05.759 Oversized SGL: Not Supported 00:24:05.759 SGL Metadata Address: Not Supported 00:24:05.759 SGL Offset: Supported 00:24:05.759 Transport SGL Data Block: Not Supported 00:24:05.759 Replay Protected Memory Block: Not Supported 00:24:05.759 00:24:05.759 Firmware Slot Information 00:24:05.759 ========================= 00:24:05.759 Active slot: 0 00:24:05.759 00:24:05.759 Asymmetric Namespace Access 00:24:05.759 =========================== 00:24:05.759 Change Count : 0 00:24:05.759 Number of ANA Group Descriptors : 1 00:24:05.759 ANA Group Descriptor : 0 00:24:05.759 ANA Group ID : 1 00:24:05.759 Number of NSID Values : 1 00:24:05.759 Change Count : 0 00:24:05.759 ANA State : 1 00:24:05.759 Namespace Identifier : 1 00:24:05.759 00:24:05.759 Commands Supported and Effects 00:24:05.759 ============================== 00:24:05.759 Admin Commands 00:24:05.759 -------------- 00:24:05.759 Get Log Page (02h): Supported 00:24:05.759 Identify (06h): Supported 00:24:05.759 Abort (08h): Supported 00:24:05.759 Set Features (09h): Supported 00:24:05.759 Get Features (0Ah): Supported 00:24:05.759 Asynchronous Event Request (0Ch): Supported 00:24:05.759 Keep Alive (18h): Supported 00:24:05.759 I/O Commands 00:24:05.759 ------------ 00:24:05.759 Flush (00h): Supported 00:24:05.759 Write (01h): Supported LBA-Change 00:24:05.759 Read (02h): Supported 00:24:05.759 Write Zeroes (08h): Supported LBA-Change 00:24:05.759 Dataset Management (09h): Supported 00:24:05.759 00:24:05.759 Error Log 00:24:05.759 ========= 
00:24:05.759 Entry: 0 00:24:05.759 Error Count: 0x3 00:24:05.759 Submission Queue Id: 0x0 00:24:05.759 Command Id: 0x5 00:24:05.759 Phase Bit: 0 00:24:05.759 Status Code: 0x2 00:24:05.759 Status Code Type: 0x0 00:24:05.759 Do Not Retry: 1 00:24:05.759 Error Location: 0x28 00:24:05.759 LBA: 0x0 00:24:05.759 Namespace: 0x0 00:24:05.759 Vendor Log Page: 0x0 00:24:05.759 ----------- 00:24:05.759 Entry: 1 00:24:05.759 Error Count: 0x2 00:24:05.759 Submission Queue Id: 0x0 00:24:05.759 Command Id: 0x5 00:24:05.759 Phase Bit: 0 00:24:05.759 Status Code: 0x2 00:24:05.759 Status Code Type: 0x0 00:24:05.759 Do Not Retry: 1 00:24:05.759 Error Location: 0x28 00:24:05.759 LBA: 0x0 00:24:05.759 Namespace: 0x0 00:24:05.759 Vendor Log Page: 0x0 00:24:05.759 ----------- 00:24:05.759 Entry: 2 00:24:05.759 Error Count: 0x1 00:24:05.759 Submission Queue Id: 0x0 00:24:05.759 Command Id: 0x4 00:24:05.759 Phase Bit: 0 00:24:05.759 Status Code: 0x2 00:24:05.759 Status Code Type: 0x0 00:24:05.759 Do Not Retry: 1 00:24:05.759 Error Location: 0x28 00:24:05.759 LBA: 0x0 00:24:05.759 Namespace: 0x0 00:24:05.759 Vendor Log Page: 0x0 00:24:05.759 00:24:05.759 Number of Queues 00:24:05.759 ================ 00:24:05.759 Number of I/O Submission Queues: 128 00:24:05.759 Number of I/O Completion Queues: 128 00:24:05.759 00:24:05.759 ZNS Specific Controller Data 00:24:05.759 ============================ 00:24:05.759 Zone Append Size Limit: 0 00:24:05.759 00:24:05.759 00:24:05.759 Active Namespaces 00:24:05.759 ================= 00:24:05.759 get_feature(0x05) failed 00:24:05.759 Namespace ID:1 00:24:05.759 Command Set Identifier: NVM (00h) 00:24:05.759 Deallocate: Supported 00:24:05.759 Deallocated/Unwritten Error: Not Supported 00:24:05.759 Deallocated Read Value: Unknown 00:24:05.759 Deallocate in Write Zeroes: Not Supported 00:24:05.759 Deallocated Guard Field: 0xFFFF 00:24:05.759 Flush: Supported 00:24:05.759 Reservation: Not Supported 00:24:05.759 Namespace Sharing Capabilities: Multiple 
Controllers 00:24:05.759 Size (in LBAs): 1953525168 (931GiB) 00:24:05.759 Capacity (in LBAs): 1953525168 (931GiB) 00:24:05.759 Utilization (in LBAs): 1953525168 (931GiB) 00:24:05.759 UUID: a5e21125-6bb7-4879-a751-d78ff456168f 00:24:05.759 Thin Provisioning: Not Supported 00:24:05.759 Per-NS Atomic Units: Yes 00:24:05.759 Atomic Boundary Size (Normal): 0 00:24:05.759 Atomic Boundary Size (PFail): 0 00:24:05.759 Atomic Boundary Offset: 0 00:24:05.759 NGUID/EUI64 Never Reused: No 00:24:05.759 ANA group ID: 1 00:24:05.759 Namespace Write Protected: No 00:24:05.759 Number of LBA Formats: 1 00:24:05.759 Current LBA Format: LBA Format #00 00:24:05.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:05.759 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:05.759 rmmod nvme_tcp 00:24:05.759 rmmod nvme_fabrics 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:05.759 04:14:34 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:24:08.324 04:14:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:08.324 04:14:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:09.261 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:09.261 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:09.261 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
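The `clean_kernel_target` trace above tears down the kernel nvmet target through configfs: unlink the subsystem from the port, remove the namespace, port, and subsystem directories, then unload the modules. A minimal sketch of that sequence as a standalone function (paths taken from the trace; the namespace `enable` file is an assumption, since the trace only shows `echo 0`):

```shell
# Sketch of the traced clean_kernel_target teardown. Order matters:
# the port->subsystem link and the namespace go before the rmdirs.
clean_kernel_target() {
    local nqn=nqn.2016-06.io.spdk:testnqn
    local subsys=/sys/kernel/config/nvmet/subsystems/$nqn
    [ -e "$subsys" ] || return 0                  # nothing to clean up
    echo 0 > "$subsys/namespaces/1/enable"        # assumed enable-file path
    rm -f "/sys/kernel/config/nvmet/ports/1/subsystems/$nqn"
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet                   # unload transport + core
}
```

On the CI box this runs as root after the test; on a host without an nvmet configfs tree the existence guard makes it a no-op.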
00:24:10.197 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:10.197 00:24:10.197 real 0m9.991s 00:24:10.197 user 0m2.214s 00:24:10.197 sys 0m3.788s 00:24:10.197 04:14:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:10.197 04:14:38 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:10.197 ************************************ 00:24:10.197 END TEST nvmf_identify_kernel_target 00:24:10.197 ************************************ 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.456 ************************************ 00:24:10.456 START TEST nvmf_auth_host 00:24:10.456 ************************************ 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:10.456 * Looking for test storage... 
00:24:10.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:10.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.456 --rc genhtml_branch_coverage=1 00:24:10.456 --rc genhtml_function_coverage=1 00:24:10.456 --rc genhtml_legend=1 00:24:10.456 --rc geninfo_all_blocks=1 00:24:10.456 --rc geninfo_unexecuted_blocks=1 00:24:10.456 00:24:10.456 ' 00:24:10.456 04:14:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:10.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.456 --rc genhtml_branch_coverage=1 00:24:10.456 --rc genhtml_function_coverage=1 00:24:10.456 --rc genhtml_legend=1 00:24:10.456 --rc geninfo_all_blocks=1 00:24:10.456 --rc geninfo_unexecuted_blocks=1 00:24:10.456 00:24:10.456 ' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:10.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.456 --rc genhtml_branch_coverage=1 00:24:10.456 --rc genhtml_function_coverage=1 00:24:10.456 --rc genhtml_legend=1 00:24:10.456 --rc geninfo_all_blocks=1 00:24:10.456 --rc geninfo_unexecuted_blocks=1 00:24:10.456 00:24:10.456 ' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:10.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:10.456 --rc genhtml_branch_coverage=1 00:24:10.456 --rc genhtml_function_coverage=1 00:24:10.456 --rc genhtml_legend=1 00:24:10.456 --rc geninfo_all_blocks=1 00:24:10.456 --rc geninfo_unexecuted_blocks=1 00:24:10.456 00:24:10.456 ' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
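The `lt 1.15 2` records a few lines back show `cmp_versions` in scripts/common.sh splitting each version on `.-:` into arrays and comparing fields numerically until one differs. A compact equivalent using `sort -V` (a simplification for illustration, not the script's actual implementation):

```shell
# True when $1 sorts strictly before $2 as a version string.
# cmp_versions in the trace compares field by field; GNU sort -V
# produces the same ordering for plain dotted versions.
version_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```

So `version_lt 1.15 2` succeeds, which is why the lcov-1.x branch ends up exporting the legacy `--rc lcov_*` coverage options seen in the surrounding records.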
00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.456 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.456 04:14:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:10.457 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:10.457 04:14:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:10.457 04:14:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:12.982 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:12.982 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.982 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:12.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:12.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:12.983 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:12.983 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:12.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:24:12.983 00:24:12.983 --- 10.0.0.2 ping statistics --- 00:24:12.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.983 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:12.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:24:12.983 00:24:12.983 --- 10.0.0.1 ping statistics --- 00:24:12.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.983 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=324797 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:12.983 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 324797 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 324797 ']' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:12.983 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8f22a016cccb9161a1688189662d3f79 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.uc2 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8f22a016cccb9161a1688189662d3f79 0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8f22a016cccb9161a1688189662d3f79 0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8f22a016cccb9161a1688189662d3f79 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.uc2 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.uc2 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.uc2 00:24:13.243 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GnL 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a 3 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a 3 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bf936da89c25c64cd015a9adfb62c44d53a0d75ba3a60ced26496f779f7c895a 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GnL 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GnL 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.GnL 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=90b27af51541d9c69354a040bba43c3fd6d885038a82df6d 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.L5W 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 90b27af51541d9c69354a040bba43c3fd6d885038a82df6d 0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 90b27af51541d9c69354a040bba43c3fd6d885038a82df6d 0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.243 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=90b27af51541d9c69354a040bba43c3fd6d885038a82df6d 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.L5W 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.L5W 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.L5W 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2SP 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b 2 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b 2 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6aeb2e9d46b99f85b0a389c54702c033b4ba26b167c5483b 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2SP 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2SP 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2SP 00:24:13.243 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f4aefd29ff42b745e1ceacb62df50217 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.N33 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f4aefd29ff42b745e1ceacb62df50217 1 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f4aefd29ff42b745e1ceacb62df50217 1 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f4aefd29ff42b745e1ceacb62df50217 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.N33 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.N33 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.N33 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=15d7aa29a9af906bcec19a03fe18280c 00:24:13.244 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Tzj 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 15d7aa29a9af906bcec19a03fe18280c 1 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 15d7aa29a9af906bcec19a03fe18280c 1 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=15d7aa29a9af906bcec19a03fe18280c 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Tzj 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Tzj 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Tzj 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:13.502 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.du9 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606 2 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606 2 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=51322eef90417accb2f147eb635fda4e1091a6dd1a1cd606 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.du9 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.du9 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.du9 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b9293af70a2e0cd76bd932af7305dec 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fut 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b9293af70a2e0cd76bd932af7305dec 0 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b9293af70a2e0cd76bd932af7305dec 0 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.502 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b9293af70a2e0cd76bd932af7305dec 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fut 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fut 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.fut 00:24:13.503 04:14:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3Ii 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732 3 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732 3 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=18ecee300f590cff009ff7133cadebb3106cd80b6b51659a1f0c0bc501e50732 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:13.503 04:14:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3Ii 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3Ii 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3Ii 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 324797 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 324797 ']' 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.503 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.uc2 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.GnL ]] 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.GnL 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.L5W 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2SP ]] 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2SP 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.069 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.N33 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Tzj ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Tzj 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.du9 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.fut ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.fut 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3Ii 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.070 04:14:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:14.070 04:14:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:15.004 Waiting for block devices as requested 00:24:15.004 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:24:15.004 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:15.261 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:15.261 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:15.519 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:15.519 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:15.519 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:15.519 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:15.777 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:15.777 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:24:15.777 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:24:15.777 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:24:16.035 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:24:16.035 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:24:16.035 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:24:16.035 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:24:16.293 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:16.551 04:14:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:16.551 No valid GPT data, bailing 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:16.551 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:24:16.808 00:24:16.808 Discovery Log Number of Records 2, Generation counter 2 00:24:16.808 =====Discovery Log Entry 0====== 00:24:16.808 trtype: tcp 00:24:16.808 adrfam: ipv4 00:24:16.808 subtype: current discovery subsystem 00:24:16.808 treq: not specified, sq flow control disable supported 00:24:16.808 portid: 1 00:24:16.808 trsvcid: 4420 00:24:16.808 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:16.808 traddr: 10.0.0.1 00:24:16.808 eflags: none 00:24:16.808 sectype: none 00:24:16.808 =====Discovery Log Entry 1====== 00:24:16.808 trtype: tcp 00:24:16.808 adrfam: ipv4 00:24:16.808 subtype: nvme subsystem 00:24:16.808 treq: not specified, sq flow control disable supported 00:24:16.808 portid: 1 00:24:16.808 trsvcid: 4420 00:24:16.808 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:16.808 traddr: 10.0.0.1 00:24:16.808 eflags: none 00:24:16.808 sectype: none 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:16.808 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.809 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 nvme0n1 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
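Each pass of the loop traced here amounts to the same round trip: install per-host DH-HMAC-CHAP parameters on the kernel target side, then connect from the SPDK initiator with the matching key. The sketch below mirrors that sequence. The nvmet configfs attribute names (`dhchap_hash`, `dhchap_dhgroup`, `dhchap_key`, `dhchap_ctrl_key`) are an assumption — xtrace hides redirect targets, so only the `echo` payloads are visible in the trace — and the full DHHC-1 secrets are abbreviated. `key1`/`ckey1` are keyring key names registered earlier in the test (not shown in this chunk). This requires root and a running SPDK target, so it is illustration only:

```shell
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Kernel target side: per-host DH-HMAC-CHAP parameters
# (attribute names assumed from the nvmet configfs auth interface).
echo 'hmac(sha256)' > "$host/dhchap_hash"
echo ffdhe2048      > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...:' > "$host/dhchap_key"       # host secret (elided; see trace)
echo 'DHHC-1:02:...:' > "$host/dhchap_ctrl_key"  # controller secret, for bidirectional auth

# SPDK initiator side: allow the digest/dhgroup, then attach with key 1
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # the test expects nvme0
rpc.py bdev_nvme_detach_controller nvme0
```

The test then repeats this for every digest/dhgroup/keyid combination, which is why the same `bdev_nvme_set_options`/`attach`/`get_controllers`/`detach` pattern recurs below.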
00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.065 nvme0n1 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.065 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.066 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.066 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.066 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.066 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.323 04:14:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.323 
04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.323 nvme0n1 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.323 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:17.580 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.581 04:14:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:24:17.581 nvme0n1 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.581 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.838 nvme0n1 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:17.838 04:14:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.838 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.839 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.096 nvme0n1 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.096 
04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.096 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.353 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:18.353 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:18.353 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:18.353 
04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.354 04:14:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.354 04:14:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 nvme0n1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 04:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.613 04:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.613 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.872 nvme0n1 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.872 04:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.872 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.873 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.131 nvme0n1 00:24:19.131 04:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:19.131 04:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.131 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.390 nvme0n1 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.390 04:14:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.390 04:14:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.648 nvme0n1 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:19.648 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.213 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.471 nvme0n1 00:24:20.471 04:14:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.471 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:20.728 
04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.728 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.729 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.729 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.729 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.729 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:20.729 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.729 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.986 nvme0n1 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.986 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:20.987 04:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.987 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.245 nvme0n1 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.245 04:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:21.245 
04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.245 04:14:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.245 04:14:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.503 nvme0n1 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.503 04:14:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.503 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.760 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.760 
04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.760 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.760 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.760 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.760 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.760 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.017 nvme0n1 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:22.017 04:14:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.913 04:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.913 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.170 nvme0n1 00:24:24.170 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.170 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.170 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.170 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.170 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.170 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.428 04:14:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.428 04:14:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.992 nvme0n1 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.992 04:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.992 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.250 nvme0n1 00:24:25.250 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.250 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.250 04:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.250 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.250 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:25.508 04:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.508 04:14:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.508 04:14:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 nvme0n1 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.074 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.074 04:14:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.074 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.641 nvme0n1 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.641 04:14:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:26.641 04:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.641 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.572 nvme0n1 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.572 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.573 04:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.573 04:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.573 04:14:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.573 04:14:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.505 nvme0n1 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.505 04:14:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.505 04:14:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.070 nvme0n1 00:24:29.070 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.070 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.070 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.070 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.070 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:29.328 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.329 04:14:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.260 nvme0n1 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:30.260 
04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.260 04:14:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.191 nvme0n1 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.191 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.192 nvme0n1 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.192 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.450 
04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.450 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.451 nvme0n1 
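The trace above repeats one cycle per key: `bdev_nvme_set_options` pins the DH-HMAC-CHAP digest and DH group, `bdev_nvme_attach_controller` connects with that key, and `bdev_nvme_detach_controller` tears it down before the next iteration. A minimal sketch of that digest/dhgroup/keyid matrix follows; it is a reconstruction from the trace, not the real harness, with `rpc_cmd` stubbed by `echo` (the real script drives a live SPDK target via `rpc.py`) and illustrative key names in place of the DHHC-1 secrets.

```shell
#!/usr/bin/env bash
# Sketch of the auth test loop visible in the trace (assumptions noted above).

digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
keys=(key0 key1 key2 key3 key4)    # stand-ins for the DHHC-1 secrets

rpc_cmd() { echo "rpc_cmd $*"; }   # stub; the real test invokes rpc.py

run_auth_matrix() {
  local digest dhgroup keyid
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # Restrict the host to one digest/dhgroup pair, then authenticate.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done
}

run_auth_matrix | wc -l   # 3 digests x 5 dhgroups x 5 keys x 3 RPCs = 225
```

Each successful attach is what produces the `nvme0n1` namespace lines and the `[[ nvme0 == \n\v\m\e\0 ]]` controller-name check seen throughout the trace.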
00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.451 04:14:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:31.451 04:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.451 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.709 
04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.709 nvme0n1 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.709 04:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.709 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.968 nvme0n1 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:31.968 04:15:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.968 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.226 nvme0n1 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.226 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.484 nvme0n1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.484 
04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.484 04:15:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.742 nvme0n1 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 
00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:32.742 04:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.742 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.000 nvme0n1 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.000 04:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.000 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.001 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.258 nvme0n1 00:24:33.258 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.258 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.258 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.258 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.258 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.259 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.516 nvme0n1 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.516 04:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.516 04:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.516 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:33.517 04:15:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.517 04:15:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.773 nvme0n1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.773 
04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.773 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.029 nvme0n1 00:24:34.029 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.029 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.029 04:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.029 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.029 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.029 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.286 04:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.286 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.287 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.544 nvme0n1 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.544 04:15:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.544 04:15:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.802 nvme0n1 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.802 04:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:34.802 04:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:34.802 
04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:34.802 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.060 nvme0n1 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.060 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.319 04:15:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.319 04:15:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.578 nvme0n1 
00:24:35.578 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.578 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:35.578 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.578 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.578 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:35.837 04:15:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:35.837 
04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:35.837 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 nvme0n1 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.403 04:15:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.403 04:15:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.009 nvme0n1 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.009 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 nvme0n1 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:37.576 04:15:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.143 nvme0n1 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.143 04:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.143 04:15:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.143 04:15:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.077 nvme0n1 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:39.077 04:15:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.077 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.078 04:15:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.011 nvme0n1 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.011 
04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.011 04:15:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.011 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.012 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.012 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.012 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.012 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.012 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.012 04:15:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.946 nvme0n1 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.946 04:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:40.946 04:15:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.946 04:15:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.879 nvme0n1 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.879 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:41.880 04:15:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.880 04:15:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.814 nvme0n1 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.814 
04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.814 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
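The `DHHC-1:…` strings echoed by `nvmet_auth_set_key` in the traces above are DH-HMAC-CHAP secrets in the representation defined by the NVMe specification: `DHHC-1:<hh>:<base64>:`, where `<hh>` indicates which hash transform (if any) was applied to the secret and the base64 payload carries the key material followed by a 4-byte CRC-32. A minimal shell sketch that unpacks one of the keys from the trace (field layout assumed from that representation; this is not part of the test script itself):

```shell
#!/usr/bin/env bash
# Unpack a DHHC-1 secret taken from the trace above.
# Assumed layout: DHHC-1:<hh>:<base64(key material || 4-byte CRC-32)>:
secret='DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==:'

hh=$(cut -d: -f2 <<<"$secret")          # hash transform indicator (00 = none)
b64=$(cut -d: -f3 <<<"$secret")         # base64 payload
total=$(base64 -d <<<"$b64" | wc -c)    # decoded bytes: key material + CRC-32
keylen=$((total - 4))                   # strip the assumed 4-byte CRC trailer

echo "hash=$hh keylen=$keylen"
```

For this particular secret the payload decodes to 52 bytes, i.e. 48 bytes of key material plus the CRC trailer, which matches the 48-byte secret sizes used elsewhere in the suite.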
00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.815 nvme0n1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.815 04:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.815 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.073 nvme0n1 00:24:43.073 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:43.074 04:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.074 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.332 nvme0n1 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.332 04:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.332 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.333 04:15:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.333 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.591 nvme0n1 00:24:43.591 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.591 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.591 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.591 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.591 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.591 04:15:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.591 04:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.591 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.850 nvme0n1 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.850 04:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.850 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.109 nvme0n1 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.109 04:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.109 
04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:44.109 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.110 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.368 nvme0n1 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.368 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 
00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.369 04:15:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.369 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.628 nvme0n1 00:24:44.628 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.628 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.628 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.628 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.628 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.628 04:15:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.628 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.628 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.887 nvme0n1 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:44.887 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.887 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.144 nvme0n1 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.144 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.144 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.402 nvme0n1 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.402 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:45.402 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.402 04:15:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.402 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.403 04:15:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.659 nvme0n1 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.659 04:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.659 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:45.660 04:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.660 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.917 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.174 nvme0n1 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.174 04:15:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.174 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.431 nvme0n1 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:46.431 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.432 
04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.432 04:15:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.689 nvme0n1 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:46.689 04:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.689 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.253 nvme0n1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:47.253 04:15:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.253 04:15:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.819 nvme0n1 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:47.819 
04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.819 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.385 nvme0n1 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.385 04:15:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.385 04:15:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:48.950 nvme0n1 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:48.950 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:48.951 
04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.951 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.515 nvme0n1 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OGYyMmEwMTZjY2NiOTE2MWExNjg4MTg5NjYyZDNmNzkc3D15: 00:24:49.515 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: ]] 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmY5MzZkYTg5YzI1YzY0Y2QwMTVhOWFkZmI2MmM0NGQ1M2EwZDc1YmEzYTYwY2VkMjY0OTZmNzc5ZjdjODk1Yfa0jwU=: 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:49.516 04:15:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.516 04:15:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.449 nvme0n1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.449 04:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:50.449 04:15:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.449 04:15:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.383 nvme0n1 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.383 04:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:51.383 04:15:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.383 04:15:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.317 nvme0n1 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:52.317 04:15:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTEzMjJlZWY5MDQxN2FjY2IyZjE0N2ViNjM1ZmRhNGUxMDkxYTZkZDFhMWNkNjA2aBUZoA==: 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:OGI5MjkzYWY3MGEyZTBjZDc2YmQ5MzJhZjczMDVkZWPq+5/X: 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:52.317 04:15:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:53.251 nvme0n1 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MThlY2VlMzAwZjU5MGNmZjAwOWZmNzEzM2NhZGViYjMxMDZjZDgwYjZiNTE2NTlhMWYwYzBiYzUwMWU1MDczMmgmBQM=: 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:53.251 
04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:53.251 04:15:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 nvme0n1 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:54.186 
04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 request: 00:24:54.186 { 00:24:54.186 "name": "nvme0", 00:24:54.186 "trtype": "tcp", 00:24:54.186 "traddr": "10.0.0.1", 00:24:54.186 "adrfam": "ipv4", 00:24:54.186 "trsvcid": "4420", 00:24:54.186 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:54.186 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:54.186 "prchk_reftag": false, 00:24:54.186 "prchk_guard": false, 00:24:54.186 "hdgst": false, 00:24:54.186 "ddgst": false, 00:24:54.186 "allow_unrecognized_csi": false, 00:24:54.186 "method": "bdev_nvme_attach_controller", 00:24:54.186 "req_id": 1 00:24:54.186 } 00:24:54.186 Got JSON-RPC error response 00:24:54.186 response: 00:24:54.186 { 00:24:54.186 "code": -5, 00:24:54.186 "message": "Input/output 
error" 00:24:54.186 } 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:54.186 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.187 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.444 request: 00:24:54.444 { 00:24:54.444 "name": "nvme0", 00:24:54.444 "trtype": "tcp", 00:24:54.444 "traddr": "10.0.0.1", 
00:24:54.444 "adrfam": "ipv4", 00:24:54.444 "trsvcid": "4420", 00:24:54.444 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:54.444 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:54.444 "prchk_reftag": false, 00:24:54.444 "prchk_guard": false, 00:24:54.444 "hdgst": false, 00:24:54.444 "ddgst": false, 00:24:54.444 "dhchap_key": "key2", 00:24:54.444 "allow_unrecognized_csi": false, 00:24:54.444 "method": "bdev_nvme_attach_controller", 00:24:54.444 "req_id": 1 00:24:54.444 } 00:24:54.444 Got JSON-RPC error response 00:24:54.444 response: 00:24:54.444 { 00:24:54.444 "code": -5, 00:24:54.444 "message": "Input/output error" 00:24:54.444 } 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.444 04:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:54.444 04:15:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.444 request: 00:24:54.444 { 00:24:54.444 "name": "nvme0", 00:24:54.444 "trtype": "tcp", 00:24:54.444 "traddr": "10.0.0.1", 00:24:54.444 "adrfam": "ipv4", 00:24:54.444 "trsvcid": "4420", 00:24:54.444 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:54.444 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:54.444 "prchk_reftag": false, 00:24:54.444 "prchk_guard": false, 00:24:54.444 "hdgst": false, 00:24:54.444 "ddgst": false, 00:24:54.444 "dhchap_key": "key1", 00:24:54.444 "dhchap_ctrlr_key": "ckey2", 00:24:54.444 "allow_unrecognized_csi": false, 00:24:54.444 "method": "bdev_nvme_attach_controller", 00:24:54.444 "req_id": 1 00:24:54.444 } 00:24:54.444 Got JSON-RPC error response 00:24:54.444 response: 00:24:54.444 { 00:24:54.444 "code": -5, 00:24:54.444 "message": "Input/output error" 00:24:54.444 } 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:54.444 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.445 04:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 nvme0n1 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:54.703 04:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 04:15:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 request: 00:24:54.703 { 00:24:54.703 "name": "nvme0", 00:24:54.703 "dhchap_key": "key1", 00:24:54.703 "dhchap_ctrlr_key": "ckey2", 00:24:54.703 "method": "bdev_nvme_set_keys", 00:24:54.703 "req_id": 1 00:24:54.703 } 00:24:54.703 Got JSON-RPC error response 00:24:54.703 response: 00:24:54.703 { 00:24:54.703 "code": -13, 00:24:54.703 "message": "Permission denied" 00:24:54.703 } 00:24:54.703 
04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:54.703 04:15:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:56.076 04:15:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTBiMjdhZjUxNTQxZDljNjkzNTRhMDQwYmJhNDNjM2ZkNmQ4ODUwMzhhODJkZjZkiVE78w==: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NmFlYjJlOWQ0NmI5OWY4NWIwYTM4OWM1NDcwMmMwMzNiNGJhMjZiMTY3YzU0ODNi4zxBQg==: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.076 nvme0n1 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjRhZWZkMjlmZjQyYjc0NWUxY2VhY2I2MmRmNTAyMTea9OiF: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: ]] 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTVkN2FhMjlhOWFmOTA2YmNlYzE5YTAzZmUxODI4MGOIoBHo: 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:56.076 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:56.077 request: 00:24:56.077 { 00:24:56.077 "name": "nvme0", 00:24:56.077 "dhchap_key": "key2", 00:24:56.077 "dhchap_ctrlr_key": "ckey1", 00:24:56.077 "method": "bdev_nvme_set_keys", 00:24:56.077 "req_id": 1 00:24:56.077 } 00:24:56.077 Got JSON-RPC error response 00:24:56.077 response: 00:24:56.077 { 00:24:56.077 "code": -13, 00:24:56.077 "message": "Permission denied" 00:24:56.077 } 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:56.077 04:15:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:57.009 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.009 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:57.009 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.009 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:57.009 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:57.266 rmmod nvme_tcp 00:24:57.266 rmmod nvme_fabrics 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 324797 ']' 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 324797 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 324797 ']' 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 324797 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 
00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 324797 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 324797' 00:24:57.266 killing process with pid 324797 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 324797 00:24:57.266 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 324797 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:57.523 04:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:59.425 04:15:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:00.802 0000:00:04.7 (8086 0e27): ioatdma -> 
vfio-pci 00:25:00.802 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:00.802 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:00.802 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:00.802 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:00.802 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:00.802 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:00.802 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:25:00.802 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:25:01.736 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:25:01.994 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.uc2 /tmp/spdk.key-null.L5W /tmp/spdk.key-sha256.N33 /tmp/spdk.key-sha384.du9 /tmp/spdk.key-sha512.3Ii /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:01.994 04:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:02.934 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:02.934 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:02.934 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:02.934 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:02.934 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:02.934 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:02.934 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:02.934 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:02.934 0000:00:04.0 (8086 
0e20): Already using the vfio-pci driver 00:25:02.934 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:02.934 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:02.934 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:02.934 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:02.934 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:03.193 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:03.193 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:03.193 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:03.193 00:25:03.193 real 0m52.860s 00:25:03.193 user 0m50.363s 00:25:03.193 sys 0m6.168s 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.193 ************************************ 00:25:03.193 END TEST nvmf_auth_host 00:25:03.193 ************************************ 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.193 ************************************ 00:25:03.193 START TEST nvmf_digest 00:25:03.193 ************************************ 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:03.193 * Looking for test storage... 
00:25:03.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:25:03.193 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:03.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.452 --rc genhtml_branch_coverage=1 00:25:03.452 --rc genhtml_function_coverage=1 00:25:03.452 --rc genhtml_legend=1 00:25:03.452 --rc geninfo_all_blocks=1 00:25:03.452 --rc geninfo_unexecuted_blocks=1 00:25:03.452 00:25:03.452 ' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:03.452 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:25:03.452 --rc genhtml_branch_coverage=1 00:25:03.452 --rc genhtml_function_coverage=1 00:25:03.452 --rc genhtml_legend=1 00:25:03.452 --rc geninfo_all_blocks=1 00:25:03.452 --rc geninfo_unexecuted_blocks=1 00:25:03.452 00:25:03.452 ' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:03.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.452 --rc genhtml_branch_coverage=1 00:25:03.452 --rc genhtml_function_coverage=1 00:25:03.452 --rc genhtml_legend=1 00:25:03.452 --rc geninfo_all_blocks=1 00:25:03.452 --rc geninfo_unexecuted_blocks=1 00:25:03.452 00:25:03.452 ' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:03.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:03.452 --rc genhtml_branch_coverage=1 00:25:03.452 --rc genhtml_function_coverage=1 00:25:03.452 --rc genhtml_legend=1 00:25:03.452 --rc geninfo_all_blocks=1 00:25:03.452 --rc geninfo_unexecuted_blocks=1 00:25:03.452 00:25:03.452 ' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:25:03.452 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:03.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:25:03.453 04:15:31 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:05.986 
04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.986 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.987 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.987 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.987 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.987 
04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.987 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.987 04:15:34 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:05.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:25:05.987 00:25:05.987 --- 10.0.0.2 ping statistics --- 00:25:05.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.987 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:05.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:05.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:25:05.987 00:25:05.987 --- 10.0.0.1 ping statistics --- 00:25:05.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.987 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:05.987 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:05.987 ************************************ 00:25:05.987 START TEST nvmf_digest_clean 00:25:05.988 ************************************ 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=335291 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 335291 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 335291 ']' 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:05.988 [2024-12-09 04:15:34.287385] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:25:05.988 [2024-12-09 04:15:34.287461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:05.988 [2024-12-09 04:15:34.357507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:05.988 [2024-12-09 04:15:34.409577] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:05.988 [2024-12-09 04:15:34.409640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:05.988 [2024-12-09 04:15:34.409668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:05.988 [2024-12-09 04:15:34.409688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:05.988 [2024-12-09 04:15:34.409697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:05.988 [2024-12-09 04:15:34.410268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.988 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.246 null0 00:25:06.246 [2024-12-09 04:15:34.649500] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.246 [2024-12-09 04:15:34.673784] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=335311 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 335311 /var/tmp/bperf.sock 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 335311 ']' 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:06.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:06.246 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:06.246 [2024-12-09 04:15:34.725480] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:06.246 [2024-12-09 04:15:34.725557] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335311 ] 00:25:06.246 [2024-12-09 04:15:34.790419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.503 [2024-12-09 04:15:34.852227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.503 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.503 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:06.503 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:06.503 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:06.503 04:15:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:07.068 04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-b nvme0 00:25:07.068 04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:07.327 nvme0n1 00:25:07.327 04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:07.327 04:15:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:07.327 Running I/O for 2 seconds... 00:25:09.630 18840.00 IOPS, 73.59 MiB/s [2024-12-09T03:15:38.206Z] 18929.50 IOPS, 73.94 MiB/s 00:25:09.630 Latency(us) 00:25:09.630 [2024-12-09T03:15:38.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.630 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:09.630 nvme0n1 : 2.01 18934.54 73.96 0.00 0.00 6751.66 3179.71 14078.10 00:25:09.630 [2024-12-09T03:15:38.206Z] =================================================================================================================== 00:25:09.630 [2024-12-09T03:15:38.206Z] Total : 18934.54 73.96 0.00 0.00 6751.66 3179.71 14078.10 00:25:09.630 { 00:25:09.630 "results": [ 00:25:09.630 { 00:25:09.630 "job": "nvme0n1", 00:25:09.630 "core_mask": "0x2", 00:25:09.630 "workload": "randread", 00:25:09.630 "status": "finished", 00:25:09.630 "queue_depth": 128, 00:25:09.630 "io_size": 4096, 00:25:09.630 "runtime": 2.006439, 00:25:09.630 "iops": 18934.54024767262, 00:25:09.630 "mibps": 73.96304784247117, 00:25:09.630 "io_failed": 0, 00:25:09.630 "io_timeout": 0, 00:25:09.630 "avg_latency_us": 6751.655372568747, 00:25:09.630 "min_latency_us": 3179.7096296296295, 00:25:09.630 "max_latency_us": 14078.103703703704 00:25:09.630 } 00:25:09.630 ], 00:25:09.630 "core_count": 1 
00:25:09.630 } 00:25:09.630 04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:09.630 04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:09.630 04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:09.630 04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:09.630 04:15:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:09.630 | select(.opcode=="crc32c") 00:25:09.630 | "\(.module_name) \(.executed)"' 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 335311 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 335311 ']' 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 335311 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:09.630 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers 
-o comm= 335311 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335311' 00:25:09.888 killing process with pid 335311 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 335311 00:25:09.888 Received shutdown signal, test time was about 2.000000 seconds 00:25:09.888 00:25:09.888 Latency(us) 00:25:09.888 [2024-12-09T03:15:38.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:09.888 [2024-12-09T03:15:38.464Z] =================================================================================================================== 00:25:09.888 [2024-12-09T03:15:38.464Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 335311 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # 
scan_dsa=false 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=335836 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 335836 /var/tmp/bperf.sock 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 335836 ']' 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.888 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:10.146 [2024-12-09 04:15:38.505599] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:10.146 [2024-12-09 04:15:38.505685] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid335836 ] 00:25:10.146 I/O size of 131072 is greater than zero copy threshold (65536). 
00:25:10.146 Zero copy mechanism will not be used. 00:25:10.146 [2024-12-09 04:15:38.572007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.147 [2024-12-09 04:15:38.628776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.405 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.405 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:10.405 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:10.405 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:10.405 04:15:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:10.664 04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.664 04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.922 nvme0n1 00:25:10.922 04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:10.922 04:15:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:11.180 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:11.180 Zero copy mechanism will not be used. 00:25:11.180 Running I/O for 2 seconds... 
00:25:13.054 5669.00 IOPS, 708.62 MiB/s [2024-12-09T03:15:41.630Z] 5821.50 IOPS, 727.69 MiB/s 00:25:13.054 Latency(us) 00:25:13.054 [2024-12-09T03:15:41.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.054 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:13.054 nvme0n1 : 2.00 5818.38 727.30 0.00 0.00 2745.79 703.91 7039.05 00:25:13.054 [2024-12-09T03:15:41.630Z] =================================================================================================================== 00:25:13.054 [2024-12-09T03:15:41.630Z] Total : 5818.38 727.30 0.00 0.00 2745.79 703.91 7039.05 00:25:13.054 { 00:25:13.054 "results": [ 00:25:13.054 { 00:25:13.054 "job": "nvme0n1", 00:25:13.054 "core_mask": "0x2", 00:25:13.054 "workload": "randread", 00:25:13.054 "status": "finished", 00:25:13.054 "queue_depth": 16, 00:25:13.054 "io_size": 131072, 00:25:13.054 "runtime": 2.003821, 00:25:13.054 "iops": 5818.383977411156, 00:25:13.054 "mibps": 727.2979971763945, 00:25:13.054 "io_failed": 0, 00:25:13.054 "io_timeout": 0, 00:25:13.054 "avg_latency_us": 2745.78694189515, 00:25:13.054 "min_latency_us": 703.9051851851851, 00:25:13.054 "max_latency_us": 7039.051851851852 00:25:13.054 } 00:25:13.054 ], 00:25:13.054 "core_count": 1 00:25:13.055 } 00:25:13.055 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:13.055 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:13.055 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:13.055 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:13.055 | select(.opcode=="crc32c") 00:25:13.055 | "\(.module_name) \(.executed)"' 00:25:13.055 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 335836 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 335836 ']' 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 335836 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.313 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335836 00:25:13.571 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:13.571 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:13.571 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335836' 00:25:13.571 killing process with pid 335836 00:25:13.571 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 335836 00:25:13.571 Received shutdown signal, test time was about 2.000000 seconds 00:25:13.571 
00:25:13.571 Latency(us) 00:25:13.571 [2024-12-09T03:15:42.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.571 [2024-12-09T03:15:42.147Z] =================================================================================================================== 00:25:13.571 [2024-12-09T03:15:42.147Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.571 04:15:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 335836 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=336253 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 336253 /var/tmp/bperf.sock 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 336253 ']' 00:25:13.571 04:15:42 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.571 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:13.830 [2024-12-09 04:15:42.180898] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:13.830 [2024-12-09 04:15:42.180971] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336253 ] 00:25:13.830 [2024-12-09 04:15:42.245924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.830 [2024-12-09 04:15:42.300831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.830 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.830 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:13.830 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:13.830 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:13.830 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:14.397 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.397 04:15:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:14.655 nvme0n1 00:25:14.655 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:14.655 04:15:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:14.913 Running I/O for 2 seconds... 
00:25:16.779 19274.00 IOPS, 75.29 MiB/s [2024-12-09T03:15:45.355Z] 19041.00 IOPS, 74.38 MiB/s 00:25:16.779 Latency(us) 00:25:16.779 [2024-12-09T03:15:45.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:16.779 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:16.779 nvme0n1 : 2.01 19043.58 74.39 0.00 0.00 6705.95 2754.94 8980.86 00:25:16.779 [2024-12-09T03:15:45.355Z] =================================================================================================================== 00:25:16.779 [2024-12-09T03:15:45.355Z] Total : 19043.58 74.39 0.00 0.00 6705.95 2754.94 8980.86 00:25:16.779 { 00:25:16.779 "results": [ 00:25:16.779 { 00:25:16.779 "job": "nvme0n1", 00:25:16.779 "core_mask": "0x2", 00:25:16.779 "workload": "randwrite", 00:25:16.779 "status": "finished", 00:25:16.779 "queue_depth": 128, 00:25:16.779 "io_size": 4096, 00:25:16.779 "runtime": 2.008131, 00:25:16.779 "iops": 19043.578332290075, 00:25:16.779 "mibps": 74.3889778605081, 00:25:16.779 "io_failed": 0, 00:25:16.779 "io_timeout": 0, 00:25:16.779 "avg_latency_us": 6705.9450784962055, 00:25:16.779 "min_latency_us": 2754.9392592592594, 00:25:16.779 "max_latency_us": 8980.85925925926 00:25:16.779 } 00:25:16.779 ], 00:25:16.779 "core_count": 1 00:25:16.779 } 00:25:16.779 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:16.779 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:16.779 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:16.779 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:16.779 | select(.opcode=="crc32c") 00:25:16.779 | "\(.module_name) \(.executed)"' 00:25:16.779 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 336253 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 336253 ']' 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 336253 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.036 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336253 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336253' 00:25:17.293 killing process with pid 336253 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 336253 00:25:17.293 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.293 
00:25:17.293 Latency(us) 00:25:17.293 [2024-12-09T03:15:45.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.293 [2024-12-09T03:15:45.869Z] =================================================================================================================== 00:25:17.293 [2024-12-09T03:15:45.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 336253 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=336656 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 336656 /var/tmp/bperf.sock 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 336656 ']' 00:25:17.293 04:15:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:17.293 04:15:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:17.552 [2024-12-09 04:15:45.909919] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:17.552 [2024-12-09 04:15:45.909995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid336656 ] 00:25:17.552 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:17.552 Zero copy mechanism will not be used. 
00:25:17.552 [2024-12-09 04:15:45.975826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.552 [2024-12-09 04:15:46.033457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.810 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:17.810 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:25:17.810 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:25:17.810 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:17.810 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:18.068 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.068 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:18.325 nvme0n1 00:25:18.325 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:18.325 04:15:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:18.583 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:18.583 Zero copy mechanism will not be used. 00:25:18.583 Running I/O for 2 seconds... 
00:25:20.444 6235.00 IOPS, 779.38 MiB/s [2024-12-09T03:15:49.020Z] 6392.00 IOPS, 799.00 MiB/s 00:25:20.444 Latency(us) 00:25:20.444 [2024-12-09T03:15:49.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.444 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:20.444 nvme0n1 : 2.00 6390.06 798.76 0.00 0.00 2496.29 1808.31 8980.86 00:25:20.444 [2024-12-09T03:15:49.020Z] =================================================================================================================== 00:25:20.444 [2024-12-09T03:15:49.020Z] Total : 6390.06 798.76 0.00 0.00 2496.29 1808.31 8980.86 00:25:20.444 { 00:25:20.444 "results": [ 00:25:20.444 { 00:25:20.444 "job": "nvme0n1", 00:25:20.444 "core_mask": "0x2", 00:25:20.444 "workload": "randwrite", 00:25:20.444 "status": "finished", 00:25:20.444 "queue_depth": 16, 00:25:20.444 "io_size": 131072, 00:25:20.444 "runtime": 2.003892, 00:25:20.444 "iops": 6390.064933639138, 00:25:20.444 "mibps": 798.7581167048922, 00:25:20.444 "io_failed": 0, 00:25:20.444 "io_timeout": 0, 00:25:20.444 "avg_latency_us": 2496.2911381260215, 00:25:20.444 "min_latency_us": 1808.3081481481481, 00:25:20.444 "max_latency_us": 8980.85925925926 00:25:20.444 } 00:25:20.444 ], 00:25:20.444 "core_count": 1 00:25:20.444 } 00:25:20.444 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:20.444 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:25:20.444 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:20.444 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:20.444 04:15:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:25:20.444 | select(.opcode=="crc32c") 00:25:20.444 | "\(.module_name) \(.executed)"' 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 336656 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 336656 ']' 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 336656 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.701 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336656 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336656' 00:25:20.959 killing process with pid 336656 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 336656 00:25:20.959 Received shutdown signal, test time was about 2.000000 seconds 00:25:20.959 00:25:20.959 
Latency(us) 00:25:20.959 [2024-12-09T03:15:49.535Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.959 [2024-12-09T03:15:49.535Z] =================================================================================================================== 00:25:20.959 [2024-12-09T03:15:49.535Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 336656 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 335291 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 335291 ']' 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 335291 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.959 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 335291 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 335291' 00:25:21.217 killing process with pid 335291 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 335291 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 335291 00:25:21.217 00:25:21.217 real 0m15.521s 00:25:21.217 user 
0m31.024s 00:25:21.217 sys 0m4.362s 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:25:21.217 ************************************ 00:25:21.217 END TEST nvmf_digest_clean 00:25:21.217 ************************************ 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:21.217 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:21.475 ************************************ 00:25:21.475 START TEST nvmf_digest_error 00:25:21.475 ************************************ 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=337208 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:21.475 04:15:49 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 337208 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 337208 ']' 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.475 04:15:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.475 [2024-12-09 04:15:49.868661] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:21.475 [2024-12-09 04:15:49.868769] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.475 [2024-12-09 04:15:49.941679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.475 [2024-12-09 04:15:49.999707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.475 [2024-12-09 04:15:49.999763] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:21.475 [2024-12-09 04:15:49.999794] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.475 [2024-12-09 04:15:49.999806] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.475 [2024-12-09 04:15:49.999816] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.475 [2024-12-09 04:15:50.000460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.739 [2024-12-09 04:15:50.137312] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.739 04:15:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.739 null0 00:25:21.739 [2024-12-09 04:15:50.258190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.739 [2024-12-09 04:15:50.282443] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=337238 00:25:21.739 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 337238 /var/tmp/bperf.sock 00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 337238 ']' 
00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:21.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.740 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:21.997 [2024-12-09 04:15:50.330649] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:21.997 [2024-12-09 04:15:50.330728] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337238 ] 00:25:21.997 [2024-12-09 04:15:50.400211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.997 [2024-12-09 04:15:50.458342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.254 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:22.254 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:22.254 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.254 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:22.512 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:22.512 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:22.512 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:22.512 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:22.512 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:22.512 04:15:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:23.078 nvme0n1 00:25:23.078 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:23.078 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.078 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:23.078 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.078 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:23.078 04:15:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:23.078 Running I/O for 2 seconds... 00:25:23.078 [2024-12-09 04:15:51.581886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.078 [2024-12-09 04:15:51.581935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-12-09 04:15:51.581957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-12-09 04:15:51.593571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.078 [2024-12-09 04:15:51.593617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-12-09 04:15:51.593635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-12-09 04:15:51.610076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.078 [2024-12-09 04:15:51.610106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-12-09 04:15:51.610137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-12-09 04:15:51.625580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.078 [2024-12-09 04:15:51.625609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9131 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-12-09 04:15:51.625625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-12-09 04:15:51.638477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.078 [2024-12-09 04:15:51.638510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-12-09 04:15:51.638528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.078 [2024-12-09 04:15:51.650017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.078 [2024-12-09 04:15:51.650047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.078 [2024-12-09 04:15:51.650064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.663579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.663607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.663623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.676619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.676651] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.676668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.688576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.688625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.688643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.701497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.701527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.701550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.713729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.713759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.713775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.727928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.727972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.727989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.739088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.739121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.739153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.753821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.753851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.753884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.769629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.769670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.769688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.781242] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.781295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.337 [2024-12-09 04:15:51.781313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.337 [2024-12-09 04:15:51.794477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.337 [2024-12-09 04:15:51.794523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.794541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.807424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.807456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.818947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.818975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:15287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.819004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.834705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.834748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.834764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.849728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.849758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.849789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.859965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.859992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.860008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.876047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.876076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.876107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.888536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.888568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.888586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.338 [2024-12-09 04:15:51.899376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.338 [2024-12-09 04:15:51.899419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:4748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.338 [2024-12-09 04:15:51.899437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.920317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.920361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:14371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:51.920377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.934064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.934095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:51.934112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.948234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.948266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:51.948292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.959334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.959362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:51.959393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.973967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.973994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:51.974031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.986576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.986622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:23.596 [2024-12-09 04:15:51.986639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:51.997490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:51.997518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:51.997549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.012200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.012245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.012263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.028596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.028643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.028660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.043008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.043040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:7811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.043057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.054732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.054759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:16803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.054790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.070032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.070061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.070093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.083652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.083682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.083713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.099132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.099166] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.596 [2024-12-09 04:15:52.099197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.596 [2024-12-09 04:15:52.113691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.596 [2024-12-09 04:15:52.113719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.597 [2024-12-09 04:15:52.113749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.597 [2024-12-09 04:15:52.126278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.597 [2024-12-09 04:15:52.126325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.597 [2024-12-09 04:15:52.126340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.597 [2024-12-09 04:15:52.139301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.597 [2024-12-09 04:15:52.139333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.597 [2024-12-09 04:15:52.139350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.597 [2024-12-09 04:15:52.151052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 
00:25:23.597 [2024-12-09 04:15:52.151096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.597 [2024-12-09 04:15:52.151112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.597 [2024-12-09 04:15:52.164027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.597 [2024-12-09 04:15:52.164054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.597 [2024-12-09 04:15:52.164085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.180646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.180675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:14038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.180690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.194233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.194265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.194291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.205412] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.205442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.220206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.220236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.220268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.232393] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.232423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.232455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.245479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.245508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:11730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.245540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.258819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.258847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.258863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.271295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.271326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.271343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.285747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.285775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.285806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.299263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.299451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.299470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.310820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.310848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.310880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.323534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.323577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.323598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.337980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.338023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.338039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.352133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.352162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 
04:15:52.352193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.362843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.362870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.362900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.379057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.379089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.855 [2024-12-09 04:15:52.379107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.855 [2024-12-09 04:15:52.394895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.855 [2024-12-09 04:15:52.394923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.856 [2024-12-09 04:15:52.394953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.856 [2024-12-09 04:15:52.410866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.856 [2024-12-09 04:15:52.410897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3188 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.856 [2024-12-09 04:15:52.410915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:23.856 [2024-12-09 04:15:52.423746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:23.856 [2024-12-09 04:15:52.423777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:23.856 [2024-12-09 04:15:52.423794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.114 [2024-12-09 04:15:52.435704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.114 [2024-12-09 04:15:52.435732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.114 [2024-12-09 04:15:52.435763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.114 [2024-12-09 04:15:52.451344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.114 [2024-12-09 04:15:52.451379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.114 [2024-12-09 04:15:52.451411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.114 [2024-12-09 04:15:52.465201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.114 [2024-12-09 04:15:52.465229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.114 [2024-12-09 04:15:52.465261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.114 [2024-12-09 04:15:52.478601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.114 [2024-12-09 04:15:52.478633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.114 [2024-12-09 04:15:52.478651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.494522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.494555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.494572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.509741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.509785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.509802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.521602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.521631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.521664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.535946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.535975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.536005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.551635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.551666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:92 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.551683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.561920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.561948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.561979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 18619.00 IOPS, 72.73 MiB/s [2024-12-09T03:15:52.691Z] 
[2024-12-09 04:15:52.576153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.576182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:10444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.576214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.592144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.592176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.592193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.605546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.605576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.605595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.619439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.619470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.619488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.631016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.631046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.631062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.643704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.643735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.643751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.658205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.658235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.658252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.672629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.672660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.672678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.115 [2024-12-09 04:15:52.683961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.115 [2024-12-09 04:15:52.684001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.115 [2024-12-09 04:15:52.684019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.699043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.699077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.699095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.713707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.713738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.713756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.725693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.725725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:24.374 [2024-12-09 04:15:52.725757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.740891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.740921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.740937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.754684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.754715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.754732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.767828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.767860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.767878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.779749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.779778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:8 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.779794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.792597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.792629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.792647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.807854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.807885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.807902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.818910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.818939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.818955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.832988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.833020] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.833053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.847630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.847660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.847676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.862279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.862311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.862328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.873887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.873919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.873953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.888574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 
00:25:24.374 [2024-12-09 04:15:52.888605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.888621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.904140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.904173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:1351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.904191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.919632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.919665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.919691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.934083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.934115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.934134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.374 [2024-12-09 04:15:52.945401] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.374 [2024-12-09 04:15:52.945433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.374 [2024-12-09 04:15:52.945450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:52.960583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:52.960614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:52.960631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:52.975024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:52.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:52.975074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:52.986794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:52.986826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:52.986843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.001624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.001656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.001688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.016576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.016608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.016626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.028450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.028481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.028498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.043428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.043468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.043487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.056566] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.056613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.056630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.070694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.070725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.070757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.083220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.083265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:15165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.083294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.098080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.098112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.098130] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.112053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.112081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:21886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.112097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.126125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.126157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.126174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.139736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.139768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.139785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.151090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.151121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25502 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.151138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.164761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.164791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.164808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.177303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.177334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.177351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.189672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.189716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.189732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.633 [2024-12-09 04:15:53.202955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.633 [2024-12-09 04:15:53.202987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:50 nsid:1 lba:19501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.633 [2024-12-09 04:15:53.203022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.218848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.218879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:22147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.218895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.233803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.233849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.233866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.244899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.244928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.244944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.260822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.260853] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.260869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.273564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.273595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.273619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.286683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.286716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11496 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.286734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.299002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.299031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:21736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.299063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.310431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.310463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.310481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.323642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.323674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.323691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.336201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.336245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.336262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.349998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.350029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.350046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.363563] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.363594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.363611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.377675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.377707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.377724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.388811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.388855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.388874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.404772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.404803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:4812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.404820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.417908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.417940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.417958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.431625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.431656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.893 [2024-12-09 04:15:53.431674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.893 [2024-12-09 04:15:53.445801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.893 [2024-12-09 04:15:53.445833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.894 [2024-12-09 04:15:53.445851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:24.894 [2024-12-09 04:15:53.457185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:24.894 [2024-12-09 04:15:53.457214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:24.894 [2024-12-09 04:15:53.457229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.472526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.472559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.472577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.487562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.487593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.487610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.502340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.502372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.502397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.517973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.518005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:21341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.518023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.528953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.528985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.529003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.542851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.542881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.542896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 [2024-12-09 04:15:53.555905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.555935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.555952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 18728.00 IOPS, 73.16 MiB/s [2024-12-09T03:15:53.728Z] [2024-12-09 04:15:53.569024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x89b420) 00:25:25.152 [2024-12-09 04:15:53.569067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:9516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:25.152 [2024-12-09 04:15:53.569084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:25.152 00:25:25.152 Latency(us) 00:25:25.152 [2024-12-09T03:15:53.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.152 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:25.152 nvme0n1 : 2.01 18741.92 73.21 0.00 0.00 6821.02 3276.80 23301.69 00:25:25.152 [2024-12-09T03:15:53.728Z] =================================================================================================================== 00:25:25.152 [2024-12-09T03:15:53.728Z] Total : 18741.92 73.21 0.00 0.00 6821.02 3276.80 23301.69 00:25:25.152 { 00:25:25.152 "results": [ 00:25:25.152 { 00:25:25.152 "job": "nvme0n1", 00:25:25.152 "core_mask": "0x2", 00:25:25.152 "workload": "randread", 00:25:25.152 "status": "finished", 00:25:25.152 "queue_depth": 128, 00:25:25.152 "io_size": 4096, 00:25:25.152 "runtime": 2.005931, 00:25:25.152 "iops": 18741.92083376746, 00:25:25.152 "mibps": 73.21062825690414, 00:25:25.152 "io_failed": 0, 00:25:25.152 "io_timeout": 0, 00:25:25.152 "avg_latency_us": 6821.022054114761, 00:25:25.152 "min_latency_us": 3276.8, 00:25:25.152 "max_latency_us": 23301.68888888889 00:25:25.152 } 00:25:25.152 ], 00:25:25.152 "core_count": 1 00:25:25.152 } 00:25:25.152 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:25.152 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:25.152 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:25.152 | .driver_specific 00:25:25.152 | .nvme_error 00:25:25.152 | .status_code 00:25:25.152 | .command_transient_transport_error' 00:25:25.152 
04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 147 > 0 )) 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 337238 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 337238 ']' 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 337238 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337238 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337238' 00:25:25.410 killing process with pid 337238 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 337238 00:25:25.410 Received shutdown signal, test time was about 2.000000 seconds 00:25:25.410 00:25:25.410 Latency(us) 00:25:25.410 [2024-12-09T03:15:53.986Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.410 [2024-12-09T03:15:53.986Z] 
=================================================================================================================== 00:25:25.410 [2024-12-09T03:15:53.986Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.410 04:15:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 337238 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=337707 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 337707 /var/tmp/bperf.sock 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 337707 ']' 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:25:25.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:25.668 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:25.668 [2024-12-09 04:15:54.173419] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:25.668 [2024-12-09 04:15:54.173512] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid337707 ] 00:25:25.668 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:25.668 Zero copy mechanism will not be used. 00:25:25.668 [2024-12-09 04:15:54.240724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:25.926 [2024-12-09 04:15:54.297388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:25.926 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:25.926 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:25.926 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:25.926 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:26.184 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:26.184 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.184 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.184 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.184 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.184 04:15:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:26.748 nvme0n1 00:25:26.748 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:26.748 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.748 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:26.748 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.748 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:26.748 04:15:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:26.748 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:26.748 Zero copy mechanism will not be used. 00:25:26.748 Running I/O for 2 seconds... 
00:25:26.748 [2024-12-09 04:15:55.279767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.279825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.279846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.285519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.285556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.285575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.291995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.292044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.292063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.296101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.296134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.296153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.303522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.303555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.303573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.310793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.310827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.310844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.316415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.748 [2024-12-09 04:15:55.316448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.748 [2024-12-09 04:15:55.316466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:26.748 [2024-12-09 04:15:55.321903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:26.749 [2024-12-09 04:15:55.321950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:26.749 [2024-12-09 04:15:55.321968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.327421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.327487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.327505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.332349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.332381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.332399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.337075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.337106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.337138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.341927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.341959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:27.006 [2024-12-09 04:15:55.341992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.346672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.346703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.346735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.351390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.351425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.351442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.356081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.356132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.356149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.360793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.360839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.360855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.365848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.365880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.365910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.371030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.371077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.371094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.375976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.376022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.006 [2024-12-09 04:15:55.376039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.006 [2024-12-09 04:15:55.380678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.006 [2024-12-09 04:15:55.380709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.380733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.386167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.386197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.386214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.391818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.391867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.391885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.397677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.397709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.397726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.404230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 
00:25:27.007 [2024-12-09 04:15:55.404283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.404302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.409750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.409782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.409813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.415207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.415239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.415279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.420133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.420163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.420181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.424772] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.424803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.424834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.429429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.429465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.429484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.433997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.434028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.434045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.438749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.438795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.438812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.443411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.443442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.443459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.448232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.448285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.448304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.453658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.453703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.453719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.458502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.458534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.458551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.463207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.463253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.463269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.467827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.467872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.467889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.472316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.472348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.472365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.475397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.475429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.475447] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.479030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.479060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.479077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.482418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.482451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.482469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.485480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.485511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.485529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.488866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.488899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.488917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.492890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.492921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.492938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.497918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.497951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.497969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.503956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.503988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.007 [2024-12-09 04:15:55.504026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.007 [2024-12-09 04:15:55.511719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.007 [2024-12-09 04:15:55.511751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.511784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.517716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.517748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.517766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.520980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.521011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.521029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.525667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.525699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.525717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.529553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.529584] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.529601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.532921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.532952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.532969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.536550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.536581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.536598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.540202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.540234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.540252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.543142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.543170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.543187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.547701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.547732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.547749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.553091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.553121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.553138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.560075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.560107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.560125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.566827] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.566858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.566891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.572417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.572450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.572467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.008 [2024-12-09 04:15:55.577991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.008 [2024-12-09 04:15:55.578021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.008 [2024-12-09 04:15:55.578038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.583419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.583450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.583468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.588006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.588037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.588059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.592684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.592715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.592732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.597306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.597337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.603242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.603298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.603317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.608159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.608190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.608207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.612862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.612892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.612909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.618545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.618577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 04:15:55.618609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.623800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.266 [2024-12-09 04:15:55.623832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.266 [2024-12-09 
04:15:55.623849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.266 [2024-12-09 04:15:55.629159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.629204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.629221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.634796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.634847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.634864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.641754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.641785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.641801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.647063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.647093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19872 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.647110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.652358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.652389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.652405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.656804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.656848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.656863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.661551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.661596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.661613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.666635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.666666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.666684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.672869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.672914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.672931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.680687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.680718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.680751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.686823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.686869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.686888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.693909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.693955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.693972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.701588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.701621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.701639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.708361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.708394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.708412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.715741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.715774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.715792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.723725] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.723772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.723790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.731657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.731690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.731723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.739330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.739363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.739382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.747055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.747088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.747112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.754656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.754688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.754706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.760367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.760398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.760416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.765166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.765197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.765215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.770340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.267 [2024-12-09 04:15:55.770371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.267 [2024-12-09 04:15:55.770389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.267 [2024-12-09 04:15:55.775396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.775428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.775446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.779903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.779934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.779951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.784395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.784426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.784444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.788878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.788908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.788926] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.793602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.793640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.793658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.798718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.798749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.798766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.803581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.803615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.803649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.808299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.808331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.808349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.812996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.813042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.813060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.817569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.817614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.817630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.822236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.822267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.822309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.826800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.826843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.826860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.831545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.831594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.831611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.268 [2024-12-09 04:15:55.836745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.268 [2024-12-09 04:15:55.836777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.268 [2024-12-09 04:15:55.836814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.526 [2024-12-09 04:15:55.841845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.841878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.841896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.846549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.846580] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.846612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.851729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.851759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.851776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.858107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.858154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.858173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.865683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.865715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.865733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.871234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.871266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.871291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.877515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.877547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.877565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.883981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.884020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.884039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.891468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.891501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.891520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.899754] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.899788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.899806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.905755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.905789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.905808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.911100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.911131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.911148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.916434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.916465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.916483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.921826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.921858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.921876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.927135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.927167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.927184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.932854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.932887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.932906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.938533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.938565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.938582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.944654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.944687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.944706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.951114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.951147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.951165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.954952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.954985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.955003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.959422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.959453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 
04:15:55.959470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.963108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.963138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.963154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.967287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.967319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.967336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.971866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.971896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.971928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.976454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.976484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.976511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.981129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.527 [2024-12-09 04:15:55.981174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.527 [2024-12-09 04:15:55.981190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:27.527 [2024-12-09 04:15:55.985850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.528 [2024-12-09 04:15:55.985894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.528 [2024-12-09 04:15:55.985911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:27.528 [2024-12-09 04:15:55.990523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.528 [2024-12-09 04:15:55.990570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:27.528 [2024-12-09 04:15:55.990586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:27.528 [2024-12-09 04:15:55.995267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:27.528 [2024-12-09 04:15:55.995304] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:55.995321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:55.999951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:55.999982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.000001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.004518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.004548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.004565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.010119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.010151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.010169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.014875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.014906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.014924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.019669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.019721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.019738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.024375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.024405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.024422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.029140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.029170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.029187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.033752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.033783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.033801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.038906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.038938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.038956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.045117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.045149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.045167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.052720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.052752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.052770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.058441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.058473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.058490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.063904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.063935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.063953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.069858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.069890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.069923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.075909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.075957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.075976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.081984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.082030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.082047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.087811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.087844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.087862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.094188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.094221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.094239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.528 [2024-12-09 04:15:56.099960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.528 [2024-12-09 04:15:56.099993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.528 [2024-12-09 04:15:56.100011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.786 [2024-12-09 04:15:56.104540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.786 [2024-12-09 04:15:56.104572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.786 [2024-12-09 04:15:56.104589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.786 [2024-12-09 04:15:56.109076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.786 [2024-12-09 04:15:56.109106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.109122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.113587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.113618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.113640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.118263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.118301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.118319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.123723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.123754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.123772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.130963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.131010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.131027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.138486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.138517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.138535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.146073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.146105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.146122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.153676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.153709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.153727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.161290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.161334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.161352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.169096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.169130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.169148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.176588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.176621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.176639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.184540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.184574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.184608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.192594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.192627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.192645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.200796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.200830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.200848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.209074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.209107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.209126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.217117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.217149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.217183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.225655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.225688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.225706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.234070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.234104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.234122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.241610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.241644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.241669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.247324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.247357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.247374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.252542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.252589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.252607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.258238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.258270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.258298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.263652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.263683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.263701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.268672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.268704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.268721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.787 5578.00 IOPS, 697.25 MiB/s [2024-12-09T03:15:56.363Z] [2024-12-09 04:15:56.275734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.275777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.275793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.281145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.787 [2024-12-09 04:15:56.281177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.787 [2024-12-09 04:15:56.281194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.787 [2024-12-09 04:15:56.287614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.287646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.287678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.294743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.294783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.294803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.300649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.300682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.300701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.307033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.307065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.307084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.313488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.313521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.313539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.319568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.319601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.319619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.324962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.324996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.325013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.330348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.330380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.330398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.335445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.335477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.335495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.340604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.340637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.340656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.343483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.343514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.343532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.347252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.347291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.347311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.351782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.351812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.351829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.356465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.356497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.356516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:27.788 [2024-12-09 04:15:56.361476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:27.788 [2024-12-09 04:15:56.361511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:27.788 [2024-12-09 04:15:56.361529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.366220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.366251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.366269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.370935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.370968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.371000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.375850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.375881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.375899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.380498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.380529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.380567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.385307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.385338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.385355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.389908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.389937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.389954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.394995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.395042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.395061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.401442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.401475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.401494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.407734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.407767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.407799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.413793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.413827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.413845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.420749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.420797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.420815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.426472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.426504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.426523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.432516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.432549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.432566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.438594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.438628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.438659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.444290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.444323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.444342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.450218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.450250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.450292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.455872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.455903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.455921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.461473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.461506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.461523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.467425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.467457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09 04:15:56.467475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.047 [2024-12-09 04:15:56.474553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.047 [2024-12-09 04:15:56.474599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.047 [2024-12-09
04:15:56.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.047 [2024-12-09 04:15:56.480727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.047 [2024-12-09 04:15:56.480759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.047 [2024-12-09 04:15:56.480783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.486829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.486876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.486894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.492608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.492641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.492672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.497987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.498035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.498052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.504024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.504056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.504074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.510375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.510407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.510425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.515193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.515225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.515243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.519757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.519787] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.519804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.524765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.524796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.524829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.530122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.530159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.530192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.534858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.534903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.534920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.539693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 
04:15:56.539723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.539759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.544399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.544430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.544448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.549063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.549094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.549110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.553764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.553795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.553813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.558412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.558442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.558460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.563029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.563060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.563077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.568190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.568220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.568237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.573363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.573394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.573411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.578066] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.578112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.578129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.582864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.582895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.582913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.587689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.587720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.587737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.592867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.592899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.592916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.598579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.598611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.598628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.604113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.604145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.604163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.609519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.609551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.609569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.614491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.614524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.614547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.048 [2024-12-09 04:15:56.617322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.048 [2024-12-09 04:15:56.617354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.048 [2024-12-09 04:15:56.617372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.049 [2024-12-09 04:15:56.621411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.049 [2024-12-09 04:15:56.621442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.049 [2024-12-09 04:15:56.621460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.625617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.625648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.625666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.628737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.628766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.628783] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.633561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.633607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.633624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.638616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.638646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.638662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.643981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.644014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.644032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.649734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.649766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.649784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.654659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.654693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.654711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.659649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.659679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.659696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.664494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.664525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.307 [2024-12-09 04:15:56.664542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.307 [2024-12-09 04:15:56.669034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.307 [2024-12-09 04:15:56.669064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.669081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.674418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.674448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.674466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.678245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.678297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.678316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.683302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.683333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.683350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.687004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 
04:15:56.687034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.687051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.691472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.691509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.691538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.695981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.696009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.696026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.700514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.700543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.700560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.705121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.705150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.705166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.710576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.710622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.710639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.714522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.714553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.714571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.719106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.308 [2024-12-09 04:15:56.719137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.308 [2024-12-09 04:15:56.719169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.308 [2024-12-09 04:15:56.723654] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308 [2024-12-09 04:15:56.723699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308 [2024-12-09 04:15:56.723716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:28.308 [2024-12-09 04:15:56.728403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.308 [2024-12-09 04:15:56.728434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:28.308 [2024-12-09 04:15:56.728451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... the same three-line pattern — data digest error on tqpair=(0x135f9d0) from nvme_tcp.c:1365, the associated READ command print from nvme_qpair.c:243, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion from nvme_qpair.c:474 — repeats for several dozen commands on qid:1 with varying cid, lba, and sqhd values between 04:15:56.733 and 04:15:57.125; repeats elided ...]
00:25:28.570 [2024-12-09 04:15:57.129535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0)
00:25:28.570 [2024-12-09 04:15:57.129565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.570 [2024-12-09 04:15:57.129583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.570 [2024-12-09 04:15:57.134136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.570 [2024-12-09 04:15:57.134167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.570 [2024-12-09 04:15:57.134185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.570 [2024-12-09 04:15:57.138779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.570 [2024-12-09 04:15:57.138814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.570 [2024-12-09 04:15:57.138831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.828 [2024-12-09 04:15:57.143457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.828 [2024-12-09 04:15:57.143489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.828 [2024-12-09 04:15:57.143507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.828 [2024-12-09 04:15:57.148008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.828 [2024-12-09 
04:15:57.148039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.148057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.152682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.152713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.152730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.157355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.157385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.157403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.162037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.162083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.162100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.167037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.167067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.167084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.171880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.171910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.171931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.176599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.176629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.176648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.182249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.182304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.182331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.189137] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.189169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.189188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.196279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.196311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.196330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.202896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.202926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.202944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.208644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.208675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.208698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.214941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.214972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.214992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.218884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.218916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.218934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.222333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.222363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.222395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.226912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.226942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.226958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.231496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.231526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.231544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.235933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.235964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.235984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.241177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.241210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.241228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.246067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.246100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 
04:15:57.246119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.250815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.250847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.250865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.255311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.255352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.255369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.259979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.260010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.260026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.265375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.265407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16704 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.265424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.829 [2024-12-09 04:15:57.271339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.271384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.271409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.829 5777.50 IOPS, 722.19 MiB/s [2024-12-09T03:15:57.405Z] [2024-12-09 04:15:57.279869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x135f9d0) 00:25:28.829 [2024-12-09 04:15:57.279902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.829 [2024-12-09 04:15:57.279920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.829 00:25:28.829 Latency(us) 00:25:28.829 [2024-12-09T03:15:57.405Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.829 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:28.829 nvme0n1 : 2.05 5657.87 707.23 0.00 0.00 2770.03 703.91 46797.56 00:25:28.829 [2024-12-09T03:15:57.406Z] =================================================================================================================== 00:25:28.830 [2024-12-09T03:15:57.406Z] Total : 5657.87 707.23 0.00 0.00 2770.03 703.91 46797.56 00:25:28.830 { 00:25:28.830 "results": [ 00:25:28.830 { 00:25:28.830 "job": "nvme0n1", 00:25:28.830 "core_mask": "0x2", 
00:25:28.830 "workload": "randread", 00:25:28.830 "status": "finished", 00:25:28.830 "queue_depth": 16, 00:25:28.830 "io_size": 131072, 00:25:28.830 "runtime": 2.045117, 00:25:28.830 "iops": 5657.867007119886, 00:25:28.830 "mibps": 707.2333758899857, 00:25:28.830 "io_failed": 0, 00:25:28.830 "io_timeout": 0, 00:25:28.830 "avg_latency_us": 2770.0314899637347, 00:25:28.830 "min_latency_us": 703.9051851851851, 00:25:28.830 "max_latency_us": 46797.55851851852 00:25:28.830 } 00:25:28.830 ], 00:25:28.830 "core_count": 1 00:25:28.830 } 00:25:28.830 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:28.830 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:28.830 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:28.830 | .driver_specific 00:25:28.830 | .nvme_error 00:25:28.830 | .status_code 00:25:28.830 | .command_transient_transport_error' 00:25:28.830 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:29.088 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 374 > 0 )) 00:25:29.088 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 337707 00:25:29.088 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 337707 ']' 00:25:29.088 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 337707 00:25:29.088 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:29.088 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.088 
04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337707 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337707' 00:25:29.345 killing process with pid 337707 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 337707 00:25:29.345 Received shutdown signal, test time was about 2.000000 seconds 00:25:29.345 00:25:29.345 Latency(us) 00:25:29.345 [2024-12-09T03:15:57.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.345 [2024-12-09T03:15:57.921Z] =================================================================================================================== 00:25:29.345 [2024-12-09T03:15:57.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 337707 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=338184 00:25:29.345 04:15:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 338184 /var/tmp/bperf.sock 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 338184 ']' 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:29.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:29.345 04:15:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:29.603 [2024-12-09 04:15:57.937938] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:25:29.603 [2024-12-09 04:15:57.938026] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338184 ] 00:25:29.603 [2024-12-09 04:15:58.004620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:29.603 [2024-12-09 04:15:58.059308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.603 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:29.603 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:29.603 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:29.603 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:30.166 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:30.166 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.166 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.166 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.166 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.166 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:30.423 nvme0n1 00:25:30.423 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:30.423 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.423 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:30.423 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.423 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:30.423 04:15:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:30.680 Running I/O for 2 seconds... 
00:25:30.680 [2024-12-09 04:15:59.068289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eefae0 00:25:30.680 [2024-12-09 04:15:59.069707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.069750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.082815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee2c28 00:25:30.680 [2024-12-09 04:15:59.084731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.084763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.091191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eeb328 00:25:30.680 [2024-12-09 04:15:59.092310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.092341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.105752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eff3c8 00:25:30.680 [2024-12-09 04:15:59.107392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.107425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:23 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.114284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.680 [2024-12-09 04:15:59.115083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.115116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.126413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee8088 00:25:30.680 [2024-12-09 04:15:59.127206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.127237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.140888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eefae0 00:25:30.680 [2024-12-09 04:15:59.142488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:13098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.142537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.152166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef0bc0 00:25:30.680 [2024-12-09 04:15:59.153523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8208 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.153554] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.164492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016edece0 00:25:30.680 [2024-12-09 04:15:59.166098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11139 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.166147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.175337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016efd640 00:25:30.680 [2024-12-09 04:15:59.177206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.177236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.187795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee0ea0 00:25:30.680 [2024-12-09 04:15:59.188851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.188882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.200062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016eeee38 00:25:30.680 [2024-12-09 04:15:59.201541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:22640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.201582] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.211964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee1b48 00:25:30.680 [2024-12-09 04:15:59.213108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:14927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.213141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.222673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ee5658 00:25:30.680 [2024-12-09 04:15:59.224034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.224064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.235516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.680 [2024-12-09 04:15:59.235868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.680 [2024-12-09 04:15:59.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.680 [2024-12-09 04:15:59.249807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.680 [2024-12-09 04:15:59.250136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21206 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:30.680 [2024-12-09 04:15:59.250165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.263430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.263683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.263730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.277646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.277909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.277938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.291967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.292239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.292293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.306263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.306523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 
lba:6043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.306556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.320547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.320895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.320941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.334550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.334888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.334935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.937 [2024-12-09 04:15:59.348838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.937 [2024-12-09 04:15:59.349095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.937 [2024-12-09 04:15:59.349141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.363090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.363393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.363442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.377404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.377668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.377716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.391620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.391881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.391929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.405713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.406052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.406083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.419864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 
[2024-12-09 04:15:59.420198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.420245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.434065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.434398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.434449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.448265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.448620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.448650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.462393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.462652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.462704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.476502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.476809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:4020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.476841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.490725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.490992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.491042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:30.938 [2024-12-09 04:15:59.504926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:30.938 [2024-12-09 04:15:59.505209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:30.938 [2024-12-09 04:15:59.505257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.518511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.518763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.518795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.532030] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.532285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.532332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.546165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.546438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.546487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.560383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.560648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.560694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.574557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.574893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.574939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 
dnr:0 00:25:31.194 [2024-12-09 04:15:59.588756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.589015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.589060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.603038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.603389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.603420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.617320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.617642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.617673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.631570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.631907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.631939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.645766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.646041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.646087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.659926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.660238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.660268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.674024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.674348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.674379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.688228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.688503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.688550] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.702499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.702785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.194 [2024-12-09 04:15:59.702830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.194 [2024-12-09 04:15:59.716628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.194 [2024-12-09 04:15:59.716996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.195 [2024-12-09 04:15:59.717025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.195 [2024-12-09 04:15:59.730884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.195 [2024-12-09 04:15:59.731233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.195 [2024-12-09 04:15:59.731265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.195 [2024-12-09 04:15:59.745041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.195 [2024-12-09 04:15:59.745309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.195 [2024-12-09 04:15:59.745354] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.195 [2024-12-09 04:15:59.759205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.195 [2024-12-09 04:15:59.759580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.195 [2024-12-09 04:15:59.759611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.772981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.773313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.773341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.786855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.787115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.787161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.800936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.801204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.450 [2024-12-09 04:15:59.801247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.815046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.815335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.815379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.829172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.829423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.829467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.843123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.843404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.843443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.857108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.857467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6593 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.857501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.871352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.871621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.871668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.885431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.885770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.885800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.899704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.899989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:18999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.900035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.913937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.914221] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.914267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.928016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.450 [2024-12-09 04:15:59.928360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.450 [2024-12-09 04:15:59.928389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.450 [2024-12-09 04:15:59.942244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.451 [2024-12-09 04:15:59.942572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.451 [2024-12-09 04:15:59.942619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.451 [2024-12-09 04:15:59.956437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.451 [2024-12-09 04:15:59.956695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.451 [2024-12-09 04:15:59.956743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.451 [2024-12-09 04:15:59.970627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.451 [2024-12-09 04:15:59.970910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.451 [2024-12-09 04:15:59.970956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.451 [2024-12-09 04:15:59.984812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.451 [2024-12-09 04:15:59.985114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.451 [2024-12-09 04:15:59.985166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.451 [2024-12-09 04:15:59.998928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.451 [2024-12-09 04:15:59.999213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.451 [2024-12-09 04:15:59.999260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.451 [2024-12-09 04:16:00.012948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.451 [2024-12-09 04:16:00.013190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.451 [2024-12-09 04:16:00.013230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.026548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 
00:25:31.708 [2024-12-09 04:16:00.026755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.026786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.040092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.040314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.040344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 18549.00 IOPS, 72.46 MiB/s [2024-12-09T03:16:00.284Z] [2024-12-09 04:16:00.054411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.054857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.054886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.069518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.069744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.069789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.083205] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.083452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.083483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.097179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.097436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.097480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.111277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.111537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.111566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.125451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.125682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.125724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:31.708 [2024-12-09 04:16:00.139232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.139461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.139491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.153056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.153283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.153330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.166888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.167132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.167161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.180391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.180623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.180657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.193848] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.194089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.194117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.207731] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.207953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.208001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.221559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.221816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.221850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.235405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.235657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.235685] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.249618] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.249849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.249877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.263481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.263709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.263752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.708 [2024-12-09 04:16:00.277544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.708 [2024-12-09 04:16:00.277808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.708 [2024-12-09 04:16:00.277836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.290705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.290899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.290928] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.303934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.304171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.304198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.317873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.318099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.318142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.331799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.332024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.332070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.345650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.345892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:13963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:31.966 [2024-12-09 04:16:00.345925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.359561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.359780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.359822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.373366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.373593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.373620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.387201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.387473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.387503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.400861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.401089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22652 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.401133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.414778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.415004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.415045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.428735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.428952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.428981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.442545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.442785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.442826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.456335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.456536] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.456565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.470139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.470393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.470422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.483822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.484023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.484052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.497594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.497830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.497858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.511433] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.511648] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.511675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.525209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.525422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:22668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.525451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:31.966 [2024-12-09 04:16:00.538938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:31.966 [2024-12-09 04:16:00.539174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:1921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:31.966 [2024-12-09 04:16:00.539201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.552235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.552473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.552501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.565735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with 
pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.565936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.565965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.579605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.579827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.579856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.593522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.593763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.593790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.607637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.607861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.607902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.621551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.621814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.621841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.635705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.635941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:10524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.635968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.649788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.650014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:14241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.650042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.663917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.664137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.664178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.678140] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.678388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.224 [2024-12-09 04:16:00.678430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.224 [2024-12-09 04:16:00.692399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.224 [2024-12-09 04:16:00.692622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.692650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.225 [2024-12-09 04:16:00.706235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.706503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.706536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.225 [2024-12-09 04:16:00.720396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.720625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.720651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 
00:25:32.225 [2024-12-09 04:16:00.734458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.734699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.734740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.225 [2024-12-09 04:16:00.748673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.748893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.748920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.225 [2024-12-09 04:16:00.762690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.762908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.762949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.225 [2024-12-09 04:16:00.776783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.777002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.777048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.225 [2024-12-09 04:16:00.790912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.225 [2024-12-09 04:16:00.791122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:9846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.225 [2024-12-09 04:16:00.791149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.482 [2024-12-09 04:16:00.804486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.482 [2024-12-09 04:16:00.804707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.482 [2024-12-09 04:16:00.804753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.818609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.818836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.818863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.832779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.833007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.833054] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.847004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.847228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.847277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.861204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.861454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.861502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.874904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.875134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.875173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.888868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.889082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.889110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.902812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.903042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:15801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.903082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.916934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.917166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.917211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.930931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.931177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.931221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.945106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.945360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:10757 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:25:32.483 [2024-12-09 04:16:00.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.959197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.959451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:15957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.959496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.973381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.973602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:12504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.973644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:00.987697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:00.987921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:00.987949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:01.001840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:01.002062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 
nsid:1 lba:10152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:01.002106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:01.015911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:01.016153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:01.016194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:01.030074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:01.030304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:16695 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:01.030333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 [2024-12-09 04:16:01.044243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.483 [2024-12-09 04:16:01.044480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.483 [2024-12-09 04:16:01.044508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.483 18438.50 IOPS, 72.03 MiB/s [2024-12-09T03:16:01.059Z] [2024-12-09 04:16:01.058214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x20ffe30) with pdu=0x200016ef96f8 00:25:32.741 
[2024-12-09 04:16:01.058420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:32.741 [2024-12-09 04:16:01.058449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:32.741 00:25:32.741 Latency(us) 00:25:32.741 [2024-12-09T03:16:01.317Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:32.741 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:32.741 nvme0n1 : 2.01 18442.83 72.04 0.00 0.00 6924.60 2791.35 14951.92 00:25:32.741 [2024-12-09T03:16:01.317Z] =================================================================================================================== 00:25:32.741 [2024-12-09T03:16:01.317Z] Total : 18442.83 72.04 0.00 0.00 6924.60 2791.35 14951.92 00:25:32.741 { 00:25:32.741 "results": [ 00:25:32.741 { 00:25:32.741 "job": "nvme0n1", 00:25:32.741 "core_mask": "0x2", 00:25:32.741 "workload": "randwrite", 00:25:32.741 "status": "finished", 00:25:32.741 "queue_depth": 128, 00:25:32.741 "io_size": 4096, 00:25:32.741 "runtime": 2.006471, 00:25:32.741 "iops": 18442.828229264214, 00:25:32.741 "mibps": 72.04229777056334, 00:25:32.741 "io_failed": 0, 00:25:32.741 "io_timeout": 0, 00:25:32.741 "avg_latency_us": 6924.598427880117, 00:25:32.741 "min_latency_us": 2791.348148148148, 00:25:32.741 "max_latency_us": 14951.917037037038 00:25:32.741 } 00:25:32.741 ], 00:25:32.741 "core_count": 1 00:25:32.741 } 00:25:32.741 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:32.741 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:32.741 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:32.741 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:32.741 | .driver_specific 00:25:32.741 | .nvme_error 00:25:32.741 | .status_code 00:25:32.741 | .command_transient_transport_error' 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 338184 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 338184 ']' 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 338184 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338184 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338184' 00:25:32.999 killing process with pid 338184 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 338184 00:25:32.999 Received shutdown signal, test time was about 2.000000 seconds 00:25:32.999 00:25:32.999 Latency(us) 00:25:32.999 [2024-12-09T03:16:01.575Z] Device Information : runtime(s) 
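The `get_transient_errcount` step traced above pipes `bdev_get_iostat` output through the jq path `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and then checks `(( 145 > 0 ))`. As a sketch, the same extraction in Python — the sample JSON below is hypothetical, shaped only by that jq path (a real `bdev_get_iostat` reply carries many more fields, omitted here):

```python
import json

# Hypothetical sample shaped like the jq path used by digest.sh;
# the count 145 matches the value observed in this run, other
# fields of the real bdev_get_iostat reply are omitted.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 145
          }
        }
      }
    }
  ]
}
""")

def transient_errcount(iostat):
    """Mirror the jq filter:
    .bdevs[0] | .driver_specific | .nvme_error
    | .status_code | .command_transient_transport_error"""
    return (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

count = transient_errcount(sample)
print(count)
# The test passes when this count is > 0, i.e. the injected
# data-digest corruption produced transient transport errors.
assert count > 0
```

The per-status-code counters only exist because the controller was attached with `bdev_nvme_set_options --nvme-error-stat`, as shown later in this log.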
IOPS MiB/s Fail/s TO/s Average min max 00:25:32.999 [2024-12-09T03:16:01.575Z] =================================================================================================================== 00:25:32.999 [2024-12-09T03:16:01.575Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:32.999 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 338184 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=338599 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 338599 /var/tmp/bperf.sock 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 338599 ']' 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:33.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.257 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.257 [2024-12-09 04:16:01.660861] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:33.257 [2024-12-09 04:16:01.660936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid338599 ] 00:25:33.257 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:33.257 Zero copy mechanism will not be used. 00:25:33.257 [2024-12-09 04:16:01.726668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.257 [2024-12-09 04:16:01.781598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.515 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.515 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:33.515 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:33.515 04:16:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:33.773 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:33.773 04:16:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.773 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:33.773 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.773 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:33.773 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:34.031 nvme0n1 00:25:34.031 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:34.031 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.031 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:34.031 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.031 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:34.031 04:16:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:34.290 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:34.290 Zero copy mechanism will not be used. 00:25:34.290 Running I/O for 2 seconds... 
00:25:34.290 [2024-12-09 04:16:02.645498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.645617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.645669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.653145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.653267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.653307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.660763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.660909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.660955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.667470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.667616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.667644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.674215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.674351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.674381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.680182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.680296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.680336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.686060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.686203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.686247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.692114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.692221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.692251] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.697964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.698103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.698133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.704240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.704400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.704429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.711031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.711153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.711183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.717524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.717664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.290 [2024-12-09 04:16:02.717694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.723626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.723830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.723859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.729961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.730127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.730156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.736284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.736446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.736476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.742721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.742906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.742949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.749798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.749895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.749950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.757522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.290 [2024-12-09 04:16:02.757698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.290 [2024-12-09 04:16:02.757726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.290 [2024-12-09 04:16:02.764829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.764945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.764976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.771307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.771419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.771448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.777429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.777574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.777604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.783481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.783634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.783681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.789619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.789812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.789841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.796416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 
00:25:34.291 [2024-12-09 04:16:02.796625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.796656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.803113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.803348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.803378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.810593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.810721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.810751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.817874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.817963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.817990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.824444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.824569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.824598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.830383] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.830503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.830533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.836416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.836537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.836565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.842328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.842475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.842505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.849102] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.849326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.849356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.856564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.856707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.856737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.291 [2024-12-09 04:16:02.862594] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.291 [2024-12-09 04:16:02.862714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.291 [2024-12-09 04:16:02.862744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.868597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.868751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.868781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:34.550 [2024-12-09 04:16:02.874622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.874764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.874793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.880810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.880936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.880981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.887781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.887990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.888018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.895106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.895263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.895326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.902232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.902375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.902423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.909347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.909439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.909468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.916418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.916510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.916539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.923317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.923428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.923469] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.930253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.930592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.930619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.937893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.938046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.945412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.945598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.945641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.952247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.952410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.550 [2024-12-09 04:16:02.952439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.958166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.958293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.958322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.964081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.964215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.964243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.970131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.970242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.970297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.976042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.976186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.976214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.982198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.982401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.982432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.988628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.988803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.550 [2024-12-09 04:16:02.988833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.550 [2024-12-09 04:16:02.995077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.550 [2024-12-09 04:16:02.995217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:02.995245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.001460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.001637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.001665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.007965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.008136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.008164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.014443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.014610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.014639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.020821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.021004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.021046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.027386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 
00:25:34.551 [2024-12-09 04:16:03.027512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.027541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.033747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.033918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.033945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.040200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.040314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.040351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.046027] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.046202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.046231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.052348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.052489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.052518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.059018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.059142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.059171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.065296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.065404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.065439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.071150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.071328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.071359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.077070] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.077184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.077213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.083781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.083858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.083886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.090224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.090321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.090369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.096184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.096319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.096348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:34.551 [2024-12-09 04:16:03.101975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.102080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.102106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.108057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.108148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.108176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.114327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.114427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.114461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.551 [2024-12-09 04:16:03.120116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.551 [2024-12-09 04:16:03.120216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.551 [2024-12-09 04:16:03.120245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.126018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.126125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.126177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.132457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.132549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.132578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.138863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.138940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.138967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.145286] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.145377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.145405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.151429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.151690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.151719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.157537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.157908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.157937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.163915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.164238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.164294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.169569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.169905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.810 [2024-12-09 04:16:03.169934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.175122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.175452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.175482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.180789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.181123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.181157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.186439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.186757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.186785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.192096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.192438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.192469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.197846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.198159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.198187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.203419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.203762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.203791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.209124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.209465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.810 [2024-12-09 04:16:03.209495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.810 [2024-12-09 04:16:03.214722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.810 [2024-12-09 04:16:03.215018] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.215046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.220377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.220741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.220771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.226831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.227118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.227146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.232975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.233327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.233357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.239285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.239633] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.239676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.245464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.245784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.245820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.251779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.252082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.252124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.258043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.258373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.258403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.264225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with 
pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.264554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.264603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.270025] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.270393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.270422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.275932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.276291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.276334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.282373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.282698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.282727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.288491] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.288795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.288823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.294626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.294946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.294989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.300841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.301128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.301178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.306917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.307214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.307243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 
04:16:03.313089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.313406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.313435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.319284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.319594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.319623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.325429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.325753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.325781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.331493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.331814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.331842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.337748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.338053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.338081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.343747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.344067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.344095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.350011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.350317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.350346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.356638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.357031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.357058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.363774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.364169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.364198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.370236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.370549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.370593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.376257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.376581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.376624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:34.811 [2024-12-09 04:16:03.382446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:34.811 [2024-12-09 04:16:03.382840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.811 [2024-12-09 04:16:03.382869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.388541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.388830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.388874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.394572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.394891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.394920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.400236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.400559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.400590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.406438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.406845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.070 [2024-12-09 04:16:03.406882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.413406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.413770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.413800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.420470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.420804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.420838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.427615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.427972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.428001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.435443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.435768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.435796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.442889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.443300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.443346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.448862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.449158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.449202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.455003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.070 [2024-12-09 04:16:03.455324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.070 [2024-12-09 04:16:03.455368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.070 [2024-12-09 04:16:03.461295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.461604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.461633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.467792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.468103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.468154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.474285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.474685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.474713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.480694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.481076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.481103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.487009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 
00:25:35.071 [2024-12-09 04:16:03.487377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.487407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.493464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.493792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.493820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.499550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.499942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.499970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.506122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.506492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.506521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.512862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.513146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.513175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.519773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.520142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.520170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.526522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.526794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.526826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.533603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.533918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.533945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.540590] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.540851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.540896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.547464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.547771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.547799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.554226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.554564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.554608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.561034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.561415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.561445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:35.071 [2024-12-09 04:16:03.568194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.568552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.568596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.575116] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.575425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.575455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.581957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.582224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.582282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.588728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.588989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.595553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.595900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.595928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.602445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.602711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.602739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.071 [2024-12-09 04:16:03.609104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.071 [2024-12-09 04:16:03.609498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.071 [2024-12-09 04:16:03.609527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.072 [2024-12-09 04:16:03.616105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.072 [2024-12-09 04:16:03.616482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.072 [2024-12-09 04:16:03.616512] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.072 [2024-12-09 04:16:03.623222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.072 [2024-12-09 04:16:03.623495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.072 [2024-12-09 04:16:03.623525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.072 [2024-12-09 04:16:03.630137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.072 [2024-12-09 04:16:03.630484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.072 [2024-12-09 04:16:03.630513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.072 [2024-12-09 04:16:03.637087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.072 [2024-12-09 04:16:03.637463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.072 [2024-12-09 04:16:03.637493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.330 4785.00 IOPS, 598.12 MiB/s [2024-12-09T03:16:03.906Z] [2024-12-09 04:16:03.645424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.645778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.645808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.650748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.650983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.651010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.655868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.656111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.656140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.660946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.661180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.661208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.666036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.666309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.666338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.671203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.671490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.671524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.676708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.676951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.330 [2024-12-09 04:16:03.676984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.330 [2024-12-09 04:16:03.682457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.330 [2024-12-09 04:16:03.682723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.682752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.688212] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 
00:25:35.331 [2024-12-09 04:16:03.688461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.688490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.693869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.694103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.694136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.699781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.700012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.705476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.705698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.705727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.711330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.711564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.711594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.716952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.717167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.717195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.722418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.722622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.722655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.727897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.728098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.728126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.733482] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.733688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.733738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.739230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.739533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.739587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.745406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.745664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.745691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.751907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.752092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.752119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:25:35.331 [2024-12-09 04:16:03.758131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.758472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.758501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.764682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.764936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.764964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.771429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.771726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.771754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.776748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.777018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.777046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.782344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.782643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.782676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.787957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.788198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.788227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.793601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.793937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.793967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.799172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.799510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.799544] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.804809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.805099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.805127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.810309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.810574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.810617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.815822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.816163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.816191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.821408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.821777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.331 [2024-12-09 04:16:03.821805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.827368] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.827611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.827638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.331 [2024-12-09 04:16:03.832824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.331 [2024-12-09 04:16:03.833206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.331 [2024-12-09 04:16:03.833233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.838617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.838891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.838922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.844207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.844492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.844520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.849296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.849547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.849593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.854969] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.855246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.855301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.860503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.860728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.860755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.866225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.866496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.866525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.871719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.872005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.872032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.877490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.877764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.877792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.882909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.883207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.883235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.888407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 
00:25:35.332 [2024-12-09 04:16:03.888628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.888663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.894063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.894403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.894432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.332 [2024-12-09 04:16:03.899682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.332 [2024-12-09 04:16:03.899853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.332 [2024-12-09 04:16:03.899881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.905303] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.905549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.905578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.910818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.911037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.911067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.916552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.916790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.916819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.922002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.922225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.922258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.927765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.927990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.928017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.933617] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.933828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.933857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.938924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.939168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.939196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.944045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.944269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.944308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.949467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.949703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.949731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:25:35.591 [2024-12-09 04:16:03.956258] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.956603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.956632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.961845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.962062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.962092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.967347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.967713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.967742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.973141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.973379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.973409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.978792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.979127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.979156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.984521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.984833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.591 [2024-12-09 04:16:03.984862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.591 [2024-12-09 04:16:03.990058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.591 [2024-12-09 04:16:03.990428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:03.990457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:03.995977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:03.996250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:03.996293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.001584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.001863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.001891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.007096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.007493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.007523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.012893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.013183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.013212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.018510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.018753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.592 [2024-12-09 04:16:04.018787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.024980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.025192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.030155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.030397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.030425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.035544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.035853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.035889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.041120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.041468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.041496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.046728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.046946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.046974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.052315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.052532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.052560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.058040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.058376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.058405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.063744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.063940] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.063968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.068816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.069036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.069065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.073912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.074124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.074153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.079026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.079288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.079317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.084115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 
00:25:35.592 [2024-12-09 04:16:04.084372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.084402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.089136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.089387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.089418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.094186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.094414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.592 [2024-12-09 04:16:04.094444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.592 [2024-12-09 04:16:04.099982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.592 [2024-12-09 04:16:04.100202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.100229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.105625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.105839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.105868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.111013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.111240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.111269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.116505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.116748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.116777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.122092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.122294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.122323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.127574] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.127781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.127809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.133213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.133428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.133457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.139481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.139751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.139779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.146120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.146347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.146377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:25:35.593 [2024-12-09 04:16:04.151315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.151518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.151553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.156413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.156631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.156659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.593 [2024-12-09 04:16:04.161510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.593 [2024-12-09 04:16:04.161732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.593 [2024-12-09 04:16:04.161761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.166538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.166752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.166801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.171737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.171929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.171963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.176861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.177081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.177122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.182001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.182221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.182250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.187096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.187324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.187353] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.192523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.192876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.192909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.198042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.198346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.198376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.203904] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.204130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.204158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.210300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.210518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.852 [2024-12-09 04:16:04.210548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.215892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.216261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.216298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.221480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.221724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.221752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.226954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.227168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.227202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.852 [2024-12-09 04:16:04.232651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.852 [2024-12-09 04:16:04.232953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.852 [2024-12-09 04:16:04.232986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.238444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.238668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.238697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.243889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.244133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.244162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.250173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.250504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.250543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.255996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.256369] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.256398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.261751] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.262065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.262096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.267302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.267526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.267554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.272879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.273112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.273141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.278435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 
[2024-12-09 04:16:04.278699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.278728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.284186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.284398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.284427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.289862] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.290113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.290141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.295635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.295924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.295953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.301195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.301440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.301470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.306926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.307140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.307184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.312550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.312848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.312883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.318117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.318315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.318346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.323724] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.324027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.324079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.329220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.329487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.329517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.334718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.334894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.334922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.340224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.340541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.340570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:25:35.853 [2024-12-09 04:16:04.345817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.853 [2024-12-09 04:16:04.346129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.853 [2024-12-09 04:16:04.346156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.853 [2024-12-09 04:16:04.351220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.351475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.351511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.356713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.357014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.357043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.362425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.362766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.362795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.368095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.368367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.368396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.373593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.373814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.373842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.379147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.379350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.379379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.385007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.385170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.385198] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.390748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.390968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.390997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.396478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.396707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.396741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.402140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.402359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.402388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.407763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.407904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:35.854 [2024-12-09 04:16:04.407932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.413454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.413636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.413664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.419162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.419368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.419403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:35.854 [2024-12-09 04:16:04.424661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:35.854 [2024-12-09 04:16:04.424886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:35.854 [2024-12-09 04:16:04.424916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.113 [2024-12-09 04:16:04.430106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.113 [2024-12-09 04:16:04.430277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.113 [2024-12-09 04:16:04.430306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.113 [2024-12-09 04:16:04.435807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.113 [2024-12-09 04:16:04.436003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.113 [2024-12-09 04:16:04.436031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.113 [2024-12-09 04:16:04.441157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.113 [2024-12-09 04:16:04.441400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.113 [2024-12-09 04:16:04.441430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.113 [2024-12-09 04:16:04.446632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.113 [2024-12-09 04:16:04.446856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.446889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.452325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.452498] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.452527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.458059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.458289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.458318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.463503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.463689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.463719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.468973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.469209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.469251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.474582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.474809] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.474838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.480264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.480513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.480542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.486190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.486427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.486457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.491724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.491945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.491974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.497308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with 
pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.497540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.497569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.502725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.502945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.502974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.508378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.508641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.508670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.514177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.514365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.514399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.519925] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.520233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.520283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.525360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.525506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.525534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.530712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.530946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.530989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.536300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.536529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.536574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 
04:16:04.541813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.542034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.542061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.547380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.547638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.547666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.553132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.553410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.553443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.558785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.559059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.559086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:36.114 [2024-12-09 04:16:04.564209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.114 [2024-12-09 04:16:04.564425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.114 [2024-12-09 04:16:04.564454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.569849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.570101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.570129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.575367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.575508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.575537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.580790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.581047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.581075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.586135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.586394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.586423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.591896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.592065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.592093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.597395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.597638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.597666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.603148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.603408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.603437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.608735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.608986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.609014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.614377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.614568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.614620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.619985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.620135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.620162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.625684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.625870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:36.115 [2024-12-09 04:16:04.625897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.631326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.631542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.631586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.636845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.637024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.637052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:36.115 [2024-12-09 04:16:04.642264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2100170) with pdu=0x200016eff3c8 00:25:36.115 [2024-12-09 04:16:04.644033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:36.115 [2024-12-09 04:16:04.644064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:36.115 5161.00 IOPS, 645.12 MiB/s 00:25:36.115 Latency(us) 00:25:36.115 [2024-12-09T03:16:04.691Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.115 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:36.115 
nvme0n1 : 2.00 5158.88 644.86 0.00 0.00 3093.23 2366.58 12233.39 00:25:36.115 [2024-12-09T03:16:04.691Z] =================================================================================================================== 00:25:36.115 [2024-12-09T03:16:04.691Z] Total : 5158.88 644.86 0.00 0.00 3093.23 2366.58 12233.39 00:25:36.115 { 00:25:36.115 "results": [ 00:25:36.115 { 00:25:36.115 "job": "nvme0n1", 00:25:36.115 "core_mask": "0x2", 00:25:36.115 "workload": "randwrite", 00:25:36.115 "status": "finished", 00:25:36.115 "queue_depth": 16, 00:25:36.115 "io_size": 131072, 00:25:36.115 "runtime": 2.003923, 00:25:36.115 "iops": 5158.88085520252, 00:25:36.115 "mibps": 644.860106900315, 00:25:36.115 "io_failed": 0, 00:25:36.115 "io_timeout": 0, 00:25:36.115 "avg_latency_us": 3093.232355280411, 00:25:36.115 "min_latency_us": 2366.5777777777776, 00:25:36.115 "max_latency_us": 12233.386666666667 00:25:36.115 } 00:25:36.115 ], 00:25:36.115 "core_count": 1 00:25:36.115 } 00:25:36.115 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:36.115 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:36.115 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:36.115 | .driver_specific 00:25:36.115 | .nvme_error 00:25:36.115 | .status_code 00:25:36.115 | .command_transient_transport_error' 00:25:36.116 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 334 > 0 )) 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 338599 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error 
-- common/autotest_common.sh@954 -- # '[' -z 338599 ']' 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 338599 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.374 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 338599 00:25:36.632 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:36.632 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:36.632 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 338599' 00:25:36.632 killing process with pid 338599 00:25:36.632 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 338599 00:25:36.632 Received shutdown signal, test time was about 2.000000 seconds 00:25:36.632 00:25:36.632 Latency(us) 00:25:36.632 [2024-12-09T03:16:05.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:36.632 [2024-12-09T03:16:05.208Z] =================================================================================================================== 00:25:36.632 [2024-12-09T03:16:05.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:36.632 04:16:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 338599 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 337208 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 337208 ']' 00:25:36.890 04:16:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 337208 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337208 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337208' 00:25:36.890 killing process with pid 337208 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 337208 00:25:36.890 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 337208 00:25:37.162 00:25:37.162 real 0m15.661s 00:25:37.162 user 0m31.332s 00:25:37.162 sys 0m4.378s 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:37.162 ************************************ 00:25:37.162 END TEST nvmf_digest_error 00:25:37.162 ************************************ 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:37.162 04:16:05 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:37.162 rmmod nvme_tcp 00:25:37.162 rmmod nvme_fabrics 00:25:37.162 rmmod nvme_keyring 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 337208 ']' 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 337208 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 337208 ']' 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 337208 00:25:37.162 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (337208) - No such process 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 337208 is not found' 00:25:37.162 Process with pid 337208 is not found 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@791 -- # iptables-save 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.162 04:16:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:39.250 00:25:39.250 real 0m35.899s 00:25:39.250 user 1m3.314s 00:25:39.250 sys 0m10.500s 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:39.250 ************************************ 00:25:39.250 END TEST nvmf_digest 00:25:39.250 ************************************ 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:25:39.250 04:16:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.250 ************************************ 00:25:39.250 START TEST nvmf_bdevperf 00:25:39.250 ************************************ 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:39.250 * Looking for test storage... 00:25:39.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:25:39.250 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:39.555 04:16:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:39.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.555 --rc genhtml_branch_coverage=1 00:25:39.555 --rc genhtml_function_coverage=1 00:25:39.555 --rc genhtml_legend=1 00:25:39.555 --rc geninfo_all_blocks=1 00:25:39.555 --rc geninfo_unexecuted_blocks=1 00:25:39.555 00:25:39.555 ' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:39.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.555 --rc genhtml_branch_coverage=1 00:25:39.555 --rc genhtml_function_coverage=1 00:25:39.555 --rc genhtml_legend=1 00:25:39.555 --rc geninfo_all_blocks=1 00:25:39.555 --rc geninfo_unexecuted_blocks=1 00:25:39.555 00:25:39.555 ' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:39.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.555 --rc genhtml_branch_coverage=1 00:25:39.555 --rc genhtml_function_coverage=1 00:25:39.555 --rc genhtml_legend=1 00:25:39.555 --rc geninfo_all_blocks=1 00:25:39.555 --rc geninfo_unexecuted_blocks=1 00:25:39.555 00:25:39.555 ' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:39.555 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:39.555 --rc genhtml_branch_coverage=1 00:25:39.555 --rc genhtml_function_coverage=1 00:25:39.555 --rc genhtml_legend=1 00:25:39.555 --rc geninfo_all_blocks=1 00:25:39.555 --rc geninfo_unexecuted_blocks=1 00:25:39.555 00:25:39.555 ' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:39.555 04:16:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:39.555 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:39.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:25:39.556 04:16:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:41.486 Found 
0000:0a:00.0 (0x8086 - 0x159b) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:41.486 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:41.486 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:41.486 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:41.487 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:25:41.487 04:16:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:41.487 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:41.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:41.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:25:41.744 00:25:41.744 --- 10.0.0.2 ping statistics --- 00:25:41.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.744 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:41.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:41.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:25:41.744 00:25:41.744 --- 10.0.0.1 ping statistics --- 00:25:41.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:41.744 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=341089 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 341089 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 341089 ']' 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:41.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:41.744 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:41.745 [2024-12-09 04:16:10.271804] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:41.745 [2024-12-09 04:16:10.271903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.002 [2024-12-09 04:16:10.347203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:42.002 [2024-12-09 04:16:10.406849] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.002 [2024-12-09 04:16:10.406914] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:42.002 [2024-12-09 04:16:10.406927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.002 [2024-12-09 04:16:10.406938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.002 [2024-12-09 04:16:10.406948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.002 [2024-12-09 04:16:10.408435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.002 [2024-12-09 04:16:10.408459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.002 [2024-12-09 04:16:10.408463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.002 [2024-12-09 04:16:10.553998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.002 04:16:10 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.002 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.260 Malloc0 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:42.260 [2024-12-09 04:16:10.621425] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:42.260 { 00:25:42.260 "params": { 00:25:42.260 "name": "Nvme$subsystem", 00:25:42.260 "trtype": "$TEST_TRANSPORT", 00:25:42.260 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:42.260 "adrfam": "ipv4", 00:25:42.260 "trsvcid": "$NVMF_PORT", 00:25:42.260 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:42.260 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:42.260 "hdgst": ${hdgst:-false}, 00:25:42.260 "ddgst": ${ddgst:-false} 00:25:42.260 }, 00:25:42.260 "method": "bdev_nvme_attach_controller" 00:25:42.260 } 00:25:42.260 EOF 00:25:42.260 )") 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:42.260 04:16:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:42.260 "params": { 00:25:42.260 "name": "Nvme1", 00:25:42.260 "trtype": "tcp", 00:25:42.260 "traddr": "10.0.0.2", 00:25:42.260 "adrfam": "ipv4", 00:25:42.260 "trsvcid": "4420", 00:25:42.260 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:42.260 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:42.260 "hdgst": false, 00:25:42.260 "ddgst": false 00:25:42.260 }, 00:25:42.260 "method": "bdev_nvme_attach_controller" 00:25:42.260 }' 00:25:42.260 [2024-12-09 04:16:10.671204] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:42.260 [2024-12-09 04:16:10.671330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341123 ] 00:25:42.260 [2024-12-09 04:16:10.739864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:42.260 [2024-12-09 04:16:10.801490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.826 Running I/O for 1 seconds... 
00:25:43.759 8374.00 IOPS, 32.71 MiB/s 00:25:43.759 Latency(us) 00:25:43.759 [2024-12-09T03:16:12.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.759 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:43.759 Verification LBA range: start 0x0 length 0x4000 00:25:43.759 Nvme1n1 : 1.05 8117.89 31.71 0.00 0.00 15113.63 3422.44 43302.31 00:25:43.759 [2024-12-09T03:16:12.335Z] =================================================================================================================== 00:25:43.759 [2024-12-09T03:16:12.335Z] Total : 8117.89 31.71 0.00 0.00 15113.63 3422.44 43302.31 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=341384 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:25:44.017 { 00:25:44.017 "params": { 00:25:44.017 "name": "Nvme$subsystem", 00:25:44.017 "trtype": "$TEST_TRANSPORT", 00:25:44.017 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:44.017 "adrfam": "ipv4", 00:25:44.017 "trsvcid": "$NVMF_PORT", 00:25:44.017 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:44.017 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:44.017 "hdgst": ${hdgst:-false}, 00:25:44.017 "ddgst": 
${ddgst:-false} 00:25:44.017 }, 00:25:44.017 "method": "bdev_nvme_attach_controller" 00:25:44.017 } 00:25:44.017 EOF 00:25:44.017 )") 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:25:44.017 04:16:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:25:44.017 "params": { 00:25:44.017 "name": "Nvme1", 00:25:44.017 "trtype": "tcp", 00:25:44.017 "traddr": "10.0.0.2", 00:25:44.017 "adrfam": "ipv4", 00:25:44.017 "trsvcid": "4420", 00:25:44.017 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:44.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:44.017 "hdgst": false, 00:25:44.017 "ddgst": false 00:25:44.017 }, 00:25:44.017 "method": "bdev_nvme_attach_controller" 00:25:44.017 }' 00:25:44.017 [2024-12-09 04:16:12.471769] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:25:44.017 [2024-12-09 04:16:12.471838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid341384 ] 00:25:44.018 [2024-12-09 04:16:12.539210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.275 [2024-12-09 04:16:12.598428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.275 Running I/O for 15 seconds... 
00:25:46.582 8059.00 IOPS, 31.48 MiB/s [2024-12-09T03:16:15.725Z] 8167.00 IOPS, 31.90 MiB/s [2024-12-09T03:16:15.725Z] 04:16:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 341089 00:25:47.149 04:16:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:47.149 [2024-12-09 04:16:15.436000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.149 [2024-12-09 04:16:15.436061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.149 [2024-12-09 04:16:15.436106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.149 [2024-12-09 04:16:15.436123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.149 [2024-12-09 04:16:15.436142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.149 [2024-12-09 04:16:15.436157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.149 [2024-12-09 04:16:15.436173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.149 [2024-12-09 04:16:15.436188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.149 [2024-12-09 04:16:15.436204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.149 [2024-12-09 04:16:15.436219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149 [2024-12-09 04:16:15.436249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149 [2024-12-09 04:16:15.436305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.149 [2024-12-09 04:16:15.436347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:39304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149 [2024-12-09 04:16:15.436378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149 [2024-12-09 04:16:15.436410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:39320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149 [2024-12-09 04:16:15.436441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.149 [2024-12-09 04:16:15.436457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.149 [2024-12-09 04:16:15.436472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:39352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:39360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:39384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:39400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:39416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:39440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.436983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.436996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:39464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.437008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.437032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.437058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.437083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:39496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.150 [2024-12-09 04:16:15.437109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.150 [2024-12-09 04:16:15.437584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.150 [2024-12-09 04:16:15.437597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:38752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:38760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:38768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:38776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:38824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.437986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.437997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:38872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:38888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:38936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:38968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.151 [2024-12-09 04:16:15.438483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151 [2024-12-09 04:16:15.438513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151 [2024-12-09 04:16:15.438542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151 [2024-12-09 04:16:15.438601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151 [2024-12-09 04:16:15.438650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151 [2024-12-09 04:16:15.438676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.151 [2024-12-09 04:16:15.438689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.151 [2024-12-09 04:16:15.438701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:47.152 [2024-12-09 04:16:15.438727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:38992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.438978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.438991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:39192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:47.152 [2024-12-09 04:16:15.439743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:47.152 [2024-12-09 04:16:15.439756] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.152 [2024-12-09 04:16:15.439767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.152 [2024-12-09 04:16:15.439780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.153 [2024-12-09 04:16:15.439792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.439805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:47.153 [2024-12-09 04:16:15.439816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.439829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e673a0 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.439843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:47.153 [2024-12-09 04:16:15.439853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:47.153 [2024-12-09 04:16:15.439863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39296 len:8 PRP1 0x0 PRP2 0x0 00:25:47.153 [2024-12-09 04:16:15.439874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.439998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.153 [2024-12-09 04:16:15.440019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.440033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.153 [2024-12-09 04:16:15.440046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.440074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.153 [2024-12-09 04:16:15.440087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.440100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.153 [2024-12-09 04:16:15.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.153 [2024-12-09 04:16:15.440125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.443401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.443443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.444090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.444120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.444136] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.444392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.444621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.444655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 [2024-12-09 04:16:15.444670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.444684] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.153 [2024-12-09 04:16:15.456797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.457172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.457216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.457232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.457505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.457738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.457757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 
[2024-12-09 04:16:15.457769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.457780] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:47.153 [2024-12-09 04:16:15.469929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.470338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.470366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.470382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.470610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.470821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.470839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 [2024-12-09 04:16:15.470851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.470862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.153 [2024-12-09 04:16:15.483156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.483516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.483544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.483560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.483793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.484004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.484023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 [2024-12-09 04:16:15.484035] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.484046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.153 [2024-12-09 04:16:15.496349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.496800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.496842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.496858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.497101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.497356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.497377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 [2024-12-09 04:16:15.497391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.497403] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.153 [2024-12-09 04:16:15.509467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.509851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.509892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.509907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.510158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.510381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.510401] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 [2024-12-09 04:16:15.510413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.510425] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.153 [2024-12-09 04:16:15.522647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.523015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.153 [2024-12-09 04:16:15.523057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.153 [2024-12-09 04:16:15.523072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.153 [2024-12-09 04:16:15.523356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.153 [2024-12-09 04:16:15.523565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.153 [2024-12-09 04:16:15.523584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.153 [2024-12-09 04:16:15.523597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.153 [2024-12-09 04:16:15.523609] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.153 [2024-12-09 04:16:15.535745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.153 [2024-12-09 04:16:15.536246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.536296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.536314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.536556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.536784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.536802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.536814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.536825] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.548773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.549159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.549201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.549217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.549478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.549710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.549729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.549741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.549752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.561848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.562184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.562212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.562227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.562497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.562713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.562737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.562750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.562761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.575097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.575467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.575510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.575526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.575777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.575987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.576005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.576017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.576029] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.588332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.588730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.588771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.588786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.589038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.589249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.589291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.589304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.589331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.601414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.601782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.601823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.601839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.602090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.602326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.602360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.602374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.602391] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.614586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.614955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.614998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.615014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.615280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.615503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.615523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.615536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.615548] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.627965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.628276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.628318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.628334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.628559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.628781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.628800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.154 [2024-12-09 04:16:15.628812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.154 [2024-12-09 04:16:15.628824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.154 [2024-12-09 04:16:15.641063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.154 [2024-12-09 04:16:15.641497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.154 [2024-12-09 04:16:15.641539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.154 [2024-12-09 04:16:15.641557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.154 [2024-12-09 04:16:15.641799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.154 [2024-12-09 04:16:15.642010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.154 [2024-12-09 04:16:15.642028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.155 [2024-12-09 04:16:15.642039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.155 [2024-12-09 04:16:15.642050] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.155 [2024-12-09 04:16:15.654212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.155 [2024-12-09 04:16:15.654607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.155 [2024-12-09 04:16:15.654650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.155 [2024-12-09 04:16:15.654666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.155 [2024-12-09 04:16:15.654928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.155 [2024-12-09 04:16:15.655124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.155 [2024-12-09 04:16:15.655143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.155 [2024-12-09 04:16:15.655156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.155 [2024-12-09 04:16:15.655168] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.155 [2024-12-09 04:16:15.667550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.155 [2024-12-09 04:16:15.667894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.155 [2024-12-09 04:16:15.667921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.155 [2024-12-09 04:16:15.667936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.155 [2024-12-09 04:16:15.668157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.155 [2024-12-09 04:16:15.668401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.155 [2024-12-09 04:16:15.668422] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.155 [2024-12-09 04:16:15.668434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.155 [2024-12-09 04:16:15.668446] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.155 [2024-12-09 04:16:15.681101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.155 [2024-12-09 04:16:15.681530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.155 [2024-12-09 04:16:15.681559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.155 [2024-12-09 04:16:15.681575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.155 [2024-12-09 04:16:15.681838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.155 [2024-12-09 04:16:15.682057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.155 [2024-12-09 04:16:15.682077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.155 [2024-12-09 04:16:15.682090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.155 [2024-12-09 04:16:15.682103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.155 [2024-12-09 04:16:15.694418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.155 [2024-12-09 04:16:15.694830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.155 [2024-12-09 04:16:15.694857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.155 [2024-12-09 04:16:15.694873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.155 [2024-12-09 04:16:15.695110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.155 [2024-12-09 04:16:15.695355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.155 [2024-12-09 04:16:15.695377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.155 [2024-12-09 04:16:15.695391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.155 [2024-12-09 04:16:15.695404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.155 [2024-12-09 04:16:15.708199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.155 [2024-12-09 04:16:15.708564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.155 [2024-12-09 04:16:15.708617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.155 [2024-12-09 04:16:15.708633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.155 [2024-12-09 04:16:15.708869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.155 [2024-12-09 04:16:15.709064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.155 [2024-12-09 04:16:15.709083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.155 [2024-12-09 04:16:15.709095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.155 [2024-12-09 04:16:15.709106] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.722073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.722446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.722475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.722492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.722727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.722946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.722965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.722977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.722989] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.735571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.735957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.735986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.736002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.736247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.736480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.736505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.736519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.736531] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.748749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.749111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.749138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.749153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.749378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.749596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.749615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.749626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.749638] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.761927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.762300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.762348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.762363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.762600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.762812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.762831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.762843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.762854] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.775172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.775604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.775630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.775646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.775898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.776094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.776112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.776124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.776140] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.788495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.788945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.788987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.789002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.789258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.789489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.789509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.789522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.789534] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.801759] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.802168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.802233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.802248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.802509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.802743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.802762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.802773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.802785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.414 [2024-12-09 04:16:15.814782] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.414 [2024-12-09 04:16:15.815152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.414 [2024-12-09 04:16:15.815195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.414 [2024-12-09 04:16:15.815211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.414 [2024-12-09 04:16:15.815479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.414 [2024-12-09 04:16:15.815712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.414 [2024-12-09 04:16:15.815731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.414 [2024-12-09 04:16:15.815743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.414 [2024-12-09 04:16:15.815754] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.827945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.828345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.828373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.828389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.828614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.828844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.828862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.828874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.828886] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.841046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.841448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.841475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.841491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.841718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.841935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.841954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.841965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.841976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 7067.00 IOPS, 27.61 MiB/s [2024-12-09T03:16:15.991Z] [2024-12-09 04:16:15.854106] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.854480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.854508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.854524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.854762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.854973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.854991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.855004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.855014] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.867259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.867633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.867662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.867678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.867927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.868138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.868156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.868169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.868180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.880538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.881016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.881057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.881073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.881329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.881544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.881564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.881576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.881588] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.893790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.894151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.894191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.894207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.894478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.894694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.894713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.894725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.894736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.906879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.907246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.907280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.907298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.907536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.907748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.907771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.907784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.907795] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.920018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.920395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.920439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.920455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.920725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.920921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.920939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.920952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.920963] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.933236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.933649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.933691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.933708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.933946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.415 [2024-12-09 04:16:15.934141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.415 [2024-12-09 04:16:15.934159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.415 [2024-12-09 04:16:15.934171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.415 [2024-12-09 04:16:15.934183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.415 [2024-12-09 04:16:15.946496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.415 [2024-12-09 04:16:15.946897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.415 [2024-12-09 04:16:15.946925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.415 [2024-12-09 04:16:15.946942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.415 [2024-12-09 04:16:15.947176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.416 [2024-12-09 04:16:15.947435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.416 [2024-12-09 04:16:15.947457] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.416 [2024-12-09 04:16:15.947470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.416 [2024-12-09 04:16:15.947489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.416 [2024-12-09 04:16:15.959796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.416 [2024-12-09 04:16:15.960162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.416 [2024-12-09 04:16:15.960204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.416 [2024-12-09 04:16:15.960220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.416 [2024-12-09 04:16:15.960477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.416 [2024-12-09 04:16:15.960712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.416 [2024-12-09 04:16:15.960730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.416 [2024-12-09 04:16:15.960742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.416 [2024-12-09 04:16:15.960753] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.416 [2024-12-09 04:16:15.972968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.416 [2024-12-09 04:16:15.973358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.416 [2024-12-09 04:16:15.973401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.416 [2024-12-09 04:16:15.973416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.416 [2024-12-09 04:16:15.973670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.416 [2024-12-09 04:16:15.973881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.416 [2024-12-09 04:16:15.973899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.416 [2024-12-09 04:16:15.973911] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.416 [2024-12-09 04:16:15.973922] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.416 [2024-12-09 04:16:15.986461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.416 [2024-12-09 04:16:15.986908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.416 [2024-12-09 04:16:15.986936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.416 [2024-12-09 04:16:15.986952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.416 [2024-12-09 04:16:15.987185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.416 [2024-12-09 04:16:15.987432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.416 [2024-12-09 04:16:15.987453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.416 [2024-12-09 04:16:15.987466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.416 [2024-12-09 04:16:15.987478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.674 [2024-12-09 04:16:15.999585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.674 [2024-12-09 04:16:16.000006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.674 [2024-12-09 04:16:16.000033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.674 [2024-12-09 04:16:16.000048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.674 [2024-12-09 04:16:16.000295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.674 [2024-12-09 04:16:16.000512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.674 [2024-12-09 04:16:16.000531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.674 [2024-12-09 04:16:16.000544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.674 [2024-12-09 04:16:16.000556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.674 [2024-12-09 04:16:16.012750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.674 [2024-12-09 04:16:16.013167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.674 [2024-12-09 04:16:16.013228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.674 [2024-12-09 04:16:16.013244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.013511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.013740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.013758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.013770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.013782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.026002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.026376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.026404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.026420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.026658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.026870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.026889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.026901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.026912] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.039118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.039471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.039499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.039515] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.039760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.039958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.039976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.039988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.039999] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.052185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.052577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.052619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.052635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.052872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.053082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.053101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.053113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.053125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.065350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.065713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.065740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.065755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.065993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.066205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.066223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.066235] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.066247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.078509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.078811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.078852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.078868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.079086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.079328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.079353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.079366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.079378] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.091587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.091971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.092012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.092028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.092256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.092495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.092515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.092527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.092539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.104765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.105260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.105310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.105327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.105596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.105792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.105810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.105822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.105833] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.117832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.118151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.118177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.118192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.118440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.118660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.118679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.118691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.118706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.131166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.131546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.131574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.675 [2024-12-09 04:16:16.131590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.675 [2024-12-09 04:16:16.131839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.675 [2024-12-09 04:16:16.132040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.675 [2024-12-09 04:16:16.132059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.675 [2024-12-09 04:16:16.132071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.675 [2024-12-09 04:16:16.132083] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.675 [2024-12-09 04:16:16.144358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.675 [2024-12-09 04:16:16.144745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.675 [2024-12-09 04:16:16.144787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.144803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.145074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.145298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.145333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.145347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.145359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.157452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.157822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.157864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.157879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.158150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.158374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.158394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.158407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.158419] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.170596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.170962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.170989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.171019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.171264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.171504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.171525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.171538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.171550] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.183719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.184212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.184252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.184269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.184538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.184770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.184788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.184800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.184812] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.196756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.197201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.197229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.197245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.197472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.197694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.197715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.197728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.197741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.209916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.210283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.210311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.210327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.210577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.210773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.210791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.210803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.210814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.223184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.223642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.223684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.223700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.223955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.224166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.224184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.224196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.224207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.676 [2024-12-09 04:16:16.236450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.676 [2024-12-09 04:16:16.236854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.676 [2024-12-09 04:16:16.236897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.676 [2024-12-09 04:16:16.236913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.676 [2024-12-09 04:16:16.237138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.676 [2024-12-09 04:16:16.237385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.676 [2024-12-09 04:16:16.237406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.676 [2024-12-09 04:16:16.237419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.676 [2024-12-09 04:16:16.237431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.935 [2024-12-09 04:16:16.250049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.935 [2024-12-09 04:16:16.250495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-12-09 04:16:16.250524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.935 [2024-12-09 04:16:16.250541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.935 [2024-12-09 04:16:16.250783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.935 [2024-12-09 04:16:16.250994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.935 [2024-12-09 04:16:16.251018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.935 [2024-12-09 04:16:16.251031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.935 [2024-12-09 04:16:16.251042] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.935 [2024-12-09 04:16:16.263276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.935 [2024-12-09 04:16:16.263620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-12-09 04:16:16.263648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.935 [2024-12-09 04:16:16.263664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.935 [2024-12-09 04:16:16.263893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.935 [2024-12-09 04:16:16.264106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.935 [2024-12-09 04:16:16.264124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.935 [2024-12-09 04:16:16.264136] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.935 [2024-12-09 04:16:16.264148] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.935 [2024-12-09 04:16:16.276423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.935 [2024-12-09 04:16:16.276843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-12-09 04:16:16.276884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.935 [2024-12-09 04:16:16.276901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.935 [2024-12-09 04:16:16.277145] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.935 [2024-12-09 04:16:16.277375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.935 [2024-12-09 04:16:16.277396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.935 [2024-12-09 04:16:16.277409] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.935 [2024-12-09 04:16:16.277421] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.935 [2024-12-09 04:16:16.289698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.935 [2024-12-09 04:16:16.290192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.935 [2024-12-09 04:16:16.290219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.935 [2024-12-09 04:16:16.290251] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.935 [2024-12-09 04:16:16.290490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.936 [2024-12-09 04:16:16.290722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.936 [2024-12-09 04:16:16.290740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.936 [2024-12-09 04:16:16.290752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.936 [2024-12-09 04:16:16.290768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.936 [2024-12-09 04:16:16.302932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:47.936 [2024-12-09 04:16:16.303387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:47.936 [2024-12-09 04:16:16.303429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:47.936 [2024-12-09 04:16:16.303446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:47.936 [2024-12-09 04:16:16.303696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:47.936 [2024-12-09 04:16:16.303892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:47.936 [2024-12-09 04:16:16.303911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:47.936 [2024-12-09 04:16:16.303923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:47.936 [2024-12-09 04:16:16.303935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:47.936 [2024-12-09 04:16:16.316230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.316619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.316662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.316677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.316932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.317142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.317161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.317173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.317185] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.329432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.329813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.329854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.329870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.330095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.330335] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.330369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.330383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.330395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.342715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.343092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.343135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.343151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.343420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.343643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.343662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.343675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.343686] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.355886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.356312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.356340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.356354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.356606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.356817] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.356835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.356847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.356858] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.368926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.369468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.369496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.369512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.369752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.369964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.369983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.369995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.370006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.382155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.382552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.382596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.382612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.382881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.383077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.383096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.383108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.383119] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.395264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.395699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.395726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.395742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.395982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.396193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.396212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.396224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.396235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.408512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.408848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.408876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.408891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.936 [2024-12-09 04:16:16.409111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.936 [2024-12-09 04:16:16.409350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.936 [2024-12-09 04:16:16.409371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.936 [2024-12-09 04:16:16.409384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.936 [2024-12-09 04:16:16.409396] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.936 [2024-12-09 04:16:16.421846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.936 [2024-12-09 04:16:16.422252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.936 [2024-12-09 04:16:16.422327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.936 [2024-12-09 04:16:16.422344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.422606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.422820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.422844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.422857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.422868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937 [2024-12-09 04:16:16.435036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937 [2024-12-09 04:16:16.435385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937 [2024-12-09 04:16:16.435414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937 [2024-12-09 04:16:16.435430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.435662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.435873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.435892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.435904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.435915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937 [2024-12-09 04:16:16.448191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937 [2024-12-09 04:16:16.448621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937 [2024-12-09 04:16:16.448681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937 [2024-12-09 04:16:16.448697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.448941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.449181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.449202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.449216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.449229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937 [2024-12-09 04:16:16.461525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937 [2024-12-09 04:16:16.461896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937 [2024-12-09 04:16:16.461939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937 [2024-12-09 04:16:16.461955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.462225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.462454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.462475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.462488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.462505] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937 [2024-12-09 04:16:16.474819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937 [2024-12-09 04:16:16.475186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937 [2024-12-09 04:16:16.475227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937 [2024-12-09 04:16:16.475243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.475527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.475743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.475762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.475774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.475785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937 [2024-12-09 04:16:16.487950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937 [2024-12-09 04:16:16.488409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937 [2024-12-09 04:16:16.488438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937 [2024-12-09 04:16:16.488454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.488713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.488909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.488928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.488940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.488951] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:47.937 [2024-12-09 04:16:16.501195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:47.937 [2024-12-09 04:16:16.501549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:47.937 [2024-12-09 04:16:16.501576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:47.937 [2024-12-09 04:16:16.501592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:47.937 [2024-12-09 04:16:16.501817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:47.937 [2024-12-09 04:16:16.502029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:47.937 [2024-12-09 04:16:16.502048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:47.937 [2024-12-09 04:16:16.502060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:47.937 [2024-12-09 04:16:16.502071] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.196 [2024-12-09 04:16:16.514724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.196 [2024-12-09 04:16:16.515109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.196 [2024-12-09 04:16:16.515138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.196 [2024-12-09 04:16:16.515154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.196 [2024-12-09 04:16:16.515383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.196 [2024-12-09 04:16:16.515662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.196 [2024-12-09 04:16:16.515682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.196 [2024-12-09 04:16:16.515695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.196 [2024-12-09 04:16:16.515707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.196 [2024-12-09 04:16:16.527930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.196 [2024-12-09 04:16:16.528300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.196 [2024-12-09 04:16:16.528342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.196 [2024-12-09 04:16:16.528358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.196 [2024-12-09 04:16:16.528627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.196 [2024-12-09 04:16:16.528823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.528842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.528854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.528865] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.541195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.541622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.541649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.541664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.541904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.542116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.542134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.542146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.542157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.554314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.554667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.554732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.554747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.554981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.555177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.555196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.555207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.555218] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.567551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.567905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.567947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.567962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.568213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.568475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.568497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.568511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.568523] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.580733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.581226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.581286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.581303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.581534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.581745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.581764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.581775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.581787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.593791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.594269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.594327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.594343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.594607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.594803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.594826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.594839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.594850] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.606894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.607366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.607395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.607410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.607659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.607855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.607874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.607886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.607897] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.619962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.197 [2024-12-09 04:16:16.620332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.197 [2024-12-09 04:16:16.620375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.197 [2024-12-09 04:16:16.620391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.197 [2024-12-09 04:16:16.620641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.197 [2024-12-09 04:16:16.620854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.197 [2024-12-09 04:16:16.620872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.197 [2024-12-09 04:16:16.620885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.197 [2024-12-09 04:16:16.620896] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.197 [2024-12-09 04:16:16.633347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.197 [2024-12-09 04:16:16.633685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.197 [2024-12-09 04:16:16.633727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.198 [2024-12-09 04:16:16.633743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.198 [2024-12-09 04:16:16.633969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.198 [2024-12-09 04:16:16.634187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.198 [2024-12-09 04:16:16.634206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.198 [2024-12-09 04:16:16.634218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.198 [2024-12-09 04:16:16.634238] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.198 [2024-12-09 04:16:16.646444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.198 [2024-12-09 04:16:16.646811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.198 [2024-12-09 04:16:16.646853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.198 [2024-12-09 04:16:16.646870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.198 [2024-12-09 04:16:16.647139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.198 [2024-12-09 04:16:16.647380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.198 [2024-12-09 04:16:16.647408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.198 [2024-12-09 04:16:16.647421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.198 [2024-12-09 04:16:16.647433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.198 [2024-12-09 04:16:16.659580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.198 [2024-12-09 04:16:16.659916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.198 [2024-12-09 04:16:16.659944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.198 [2024-12-09 04:16:16.659959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.198 [2024-12-09 04:16:16.660184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.198 [2024-12-09 04:16:16.660444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.198 [2024-12-09 04:16:16.660464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.198 [2024-12-09 04:16:16.660477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.198 [2024-12-09 04:16:16.660489] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.198 [2024-12-09 04:16:16.672666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.198 [2024-12-09 04:16:16.673072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.198 [2024-12-09 04:16:16.673100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.198 [2024-12-09 04:16:16.673116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.198 [2024-12-09 04:16:16.673349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.198 [2024-12-09 04:16:16.673557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.198 [2024-12-09 04:16:16.673576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.198 [2024-12-09 04:16:16.673589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.198 [2024-12-09 04:16:16.673601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.198 [2024-12-09 04:16:16.685851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.198 [2024-12-09 04:16:16.686346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.198 [2024-12-09 04:16:16.686388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.198 [2024-12-09 04:16:16.686405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.198 [2024-12-09 04:16:16.686657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.198 [2024-12-09 04:16:16.686852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.198 [2024-12-09 04:16:16.686870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.198 [2024-12-09 04:16:16.686882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.198 [2024-12-09 04:16:16.686893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.198 [2024-12-09 04:16:16.699218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198 [2024-12-09 04:16:16.699583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198 [2024-12-09 04:16:16.699612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198 [2024-12-09 04:16:16.699628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198 [2024-12-09 04:16:16.699889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198 [2024-12-09 04:16:16.700115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198 [2024-12-09 04:16:16.700135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198 [2024-12-09 04:16:16.700148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198 [2024-12-09 04:16:16.700176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198 [2024-12-09 04:16:16.712523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198 [2024-12-09 04:16:16.712828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198 [2024-12-09 04:16:16.712868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198 [2024-12-09 04:16:16.712883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198 [2024-12-09 04:16:16.713086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198 [2024-12-09 04:16:16.713356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198 [2024-12-09 04:16:16.713377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198 [2024-12-09 04:16:16.713389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.198 [2024-12-09 04:16:16.713401] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.198 [2024-12-09 04:16:16.725700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.198 [2024-12-09 04:16:16.726066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.198 [2024-12-09 04:16:16.726108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.198 [2024-12-09 04:16:16.726124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.198 [2024-12-09 04:16:16.726399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.198 [2024-12-09 04:16:16.726608] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.198 [2024-12-09 04:16:16.726642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.198 [2024-12-09 04:16:16.726655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199 [2024-12-09 04:16:16.726667] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.199 [2024-12-09 04:16:16.738824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.199 [2024-12-09 04:16:16.739319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.199 [2024-12-09 04:16:16.739362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.199 [2024-12-09 04:16:16.739379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.199 [2024-12-09 04:16:16.739620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.199 [2024-12-09 04:16:16.739837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.199 [2024-12-09 04:16:16.739856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.199 [2024-12-09 04:16:16.739868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199 [2024-12-09 04:16:16.739880] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.199 [2024-12-09 04:16:16.751908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.199 [2024-12-09 04:16:16.752223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.199 [2024-12-09 04:16:16.752264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.199 [2024-12-09 04:16:16.752290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.199 [2024-12-09 04:16:16.752538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.199 [2024-12-09 04:16:16.752767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.199 [2024-12-09 04:16:16.752786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.199 [2024-12-09 04:16:16.752798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199 [2024-12-09 04:16:16.752810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.199 [2024-12-09 04:16:16.765078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.199 [2024-12-09 04:16:16.765473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.199 [2024-12-09 04:16:16.765524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.199 [2024-12-09 04:16:16.765540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.199 [2024-12-09 04:16:16.765766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.199 [2024-12-09 04:16:16.765978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.199 [2024-12-09 04:16:16.766002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.199 [2024-12-09 04:16:16.766015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.199 [2024-12-09 04:16:16.766026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.458 [2024-12-09 04:16:16.778721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.458 [2024-12-09 04:16:16.779160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.458 [2024-12-09 04:16:16.779202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.458 [2024-12-09 04:16:16.779219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.458 [2024-12-09 04:16:16.779478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.458 [2024-12-09 04:16:16.779694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.458 [2024-12-09 04:16:16.779712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.458 [2024-12-09 04:16:16.779724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.458 [2024-12-09 04:16:16.779736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.458 [2024-12-09 04:16:16.792018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.458 [2024-12-09 04:16:16.792383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.458 [2024-12-09 04:16:16.792412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.458 [2024-12-09 04:16:16.792428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.458 [2024-12-09 04:16:16.792672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.458 [2024-12-09 04:16:16.792884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.458 [2024-12-09 04:16:16.792902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.458 [2024-12-09 04:16:16.792914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.458 [2024-12-09 04:16:16.792925] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.458 [2024-12-09 04:16:16.805649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.458 [2024-12-09 04:16:16.806065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.458 [2024-12-09 04:16:16.806124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.806139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.806396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.806633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.806651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.806663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.806679] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 [2024-12-09 04:16:16.818890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.819230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.819257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.819296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.819544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.819773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.819793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.819805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.819817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 [2024-12-09 04:16:16.832207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.832714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.832765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.832780] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.833044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.833249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.833294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.833308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.833334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 [2024-12-09 04:16:16.845538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.845977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.846044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.846060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.846303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.846519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.846538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.846550] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.846562] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 5300.25 IOPS, 20.70 MiB/s [2024-12-09T03:16:17.035Z] [2024-12-09 04:16:16.858863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.859234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.859285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.859303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.859541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.859755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.859773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.859785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.859796] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 [2024-12-09 04:16:16.872204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.872718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.872770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.872786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.873058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.873268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.873312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.873325] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.873336] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 [2024-12-09 04:16:16.885501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.885959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.886010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.886025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.886301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.459 [2024-12-09 04:16:16.886517] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.459 [2024-12-09 04:16:16.886536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.459 [2024-12-09 04:16:16.886549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.459 [2024-12-09 04:16:16.886561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.459 [2024-12-09 04:16:16.899050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.459 [2024-12-09 04:16:16.899411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.459 [2024-12-09 04:16:16.899439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.459 [2024-12-09 04:16:16.899461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.459 [2024-12-09 04:16:16.899693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460 [2024-12-09 04:16:16.899916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460 [2024-12-09 04:16:16.899936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460 [2024-12-09 04:16:16.899948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460 [2024-12-09 04:16:16.899960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460 [2024-12-09 04:16:16.912658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460 [2024-12-09 04:16:16.913077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460 [2024-12-09 04:16:16.913129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460 [2024-12-09 04:16:16.913145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460 [2024-12-09 04:16:16.913397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460 [2024-12-09 04:16:16.913630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460 [2024-12-09 04:16:16.913649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460 [2024-12-09 04:16:16.913660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460 [2024-12-09 04:16:16.913671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460 [2024-12-09 04:16:16.926075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460 [2024-12-09 04:16:16.926422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460 [2024-12-09 04:16:16.926472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460 [2024-12-09 04:16:16.926489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460 [2024-12-09 04:16:16.926747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460 [2024-12-09 04:16:16.926943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460 [2024-12-09 04:16:16.926961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460 [2024-12-09 04:16:16.926973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460 [2024-12-09 04:16:16.926984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460 [2024-12-09 04:16:16.939376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460 [2024-12-09 04:16:16.939867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460 [2024-12-09 04:16:16.939895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460 [2024-12-09 04:16:16.939911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460 [2024-12-09 04:16:16.940154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460 [2024-12-09 04:16:16.940405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460 [2024-12-09 04:16:16.940431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460 [2024-12-09 04:16:16.940446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460 [2024-12-09 04:16:16.940458] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460 [2024-12-09 04:16:16.952677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460 [2024-12-09 04:16:16.953037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460 [2024-12-09 04:16:16.953066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460 [2024-12-09 04:16:16.953082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460 [2024-12-09 04:16:16.953311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460 [2024-12-09 04:16:16.953534] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460 [2024-12-09 04:16:16.953554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460 [2024-12-09 04:16:16.953568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460 [2024-12-09 04:16:16.953581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460 [2024-12-09 04:16:16.966085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:48.460 [2024-12-09 04:16:16.966445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:48.460 [2024-12-09 04:16:16.966473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:48.460 [2024-12-09 04:16:16.966488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:48.460 [2024-12-09 04:16:16.966727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:48.460 [2024-12-09 04:16:16.966939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:48.460 [2024-12-09 04:16:16.966957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:48.460 [2024-12-09 04:16:16.966969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:48.460 [2024-12-09 04:16:16.966980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:48.460 [2024-12-09 04:16:16.979170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.460 [2024-12-09 04:16:16.979696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.460 [2024-12-09 04:16:16.979723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.460 [2024-12-09 04:16:16.979754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.460 [2024-12-09 04:16:16.980007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.460 [2024-12-09 04:16:16.980218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.460 [2024-12-09 04:16:16.980236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.460 [2024-12-09 04:16:16.980262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.460 [2024-12-09 04:16:16.980288] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.460 [2024-12-09 04:16:16.992353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.460 [2024-12-09 04:16:16.992735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.460 [2024-12-09 04:16:16.992762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.460 [2024-12-09 04:16:16.992792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.460 [2024-12-09 04:16:16.993016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.460 [2024-12-09 04:16:16.993228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.460 [2024-12-09 04:16:16.993246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.460 [2024-12-09 04:16:16.993258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.461 [2024-12-09 04:16:16.993270] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.461 [2024-12-09 04:16:17.005477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.461 [2024-12-09 04:16:17.005814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.461 [2024-12-09 04:16:17.005841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.461 [2024-12-09 04:16:17.005856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.461 [2024-12-09 04:16:17.006080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.461 [2024-12-09 04:16:17.006317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.461 [2024-12-09 04:16:17.006337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.461 [2024-12-09 04:16:17.006364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.461 [2024-12-09 04:16:17.006376] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.461 [2024-12-09 04:16:17.018701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.461 [2024-12-09 04:16:17.019034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.461 [2024-12-09 04:16:17.019063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.461 [2024-12-09 04:16:17.019078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.461 [2024-12-09 04:16:17.019308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.461 [2024-12-09 04:16:17.019531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.461 [2024-12-09 04:16:17.019551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.461 [2024-12-09 04:16:17.019564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.461 [2024-12-09 04:16:17.019576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.461 [2024-12-09 04:16:17.032445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.461 [2024-12-09 04:16:17.032801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.461 [2024-12-09 04:16:17.032829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.461 [2024-12-09 04:16:17.032845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.461 [2024-12-09 04:16:17.033078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.720 [2024-12-09 04:16:17.033349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.720 [2024-12-09 04:16:17.033371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.720 [2024-12-09 04:16:17.033400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.720 [2024-12-09 04:16:17.033412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.720 [2024-12-09 04:16:17.045667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.720 [2024-12-09 04:16:17.045988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-12-09 04:16:17.046015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.720 [2024-12-09 04:16:17.046030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.720 [2024-12-09 04:16:17.046248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.720 [2024-12-09 04:16:17.046477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.720 [2024-12-09 04:16:17.046498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.720 [2024-12-09 04:16:17.046511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.720 [2024-12-09 04:16:17.046522] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.720 [2024-12-09 04:16:17.058813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.720 [2024-12-09 04:16:17.059180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-12-09 04:16:17.059221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.720 [2024-12-09 04:16:17.059237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.720 [2024-12-09 04:16:17.059518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.720 [2024-12-09 04:16:17.059731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.720 [2024-12-09 04:16:17.059750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.720 [2024-12-09 04:16:17.059762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.720 [2024-12-09 04:16:17.059774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.720 [2024-12-09 04:16:17.071932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.720 [2024-12-09 04:16:17.072303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-12-09 04:16:17.072344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.720 [2024-12-09 04:16:17.072364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.720 [2024-12-09 04:16:17.072609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.720 [2024-12-09 04:16:17.072805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.720 [2024-12-09 04:16:17.072823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.720 [2024-12-09 04:16:17.072835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.720 [2024-12-09 04:16:17.072846] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.720 [2024-12-09 04:16:17.085094] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.720 [2024-12-09 04:16:17.085436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.720 [2024-12-09 04:16:17.085464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.085479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.085706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.085918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.085936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.085948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.085960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.098251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.098622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.098649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.098665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.098904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.099133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.099152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.099164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.099176] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.111372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.111737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.111764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.111779] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.112017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.112220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.112243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.112256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.112267] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.124672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.125066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.125094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.125109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.125353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.125568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.125588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.125601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.125613] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.137864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.138291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.138336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.138352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.138597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.138809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.138828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.138839] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.138851] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.151089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.151461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.151504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.151520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.151789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.151985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.152004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.152015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.152031] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.164213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.164734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.164777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.164793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.165061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.165257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.721 [2024-12-09 04:16:17.165299] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.721 [2024-12-09 04:16:17.165314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.721 [2024-12-09 04:16:17.165325] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.721 [2024-12-09 04:16:17.177345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.721 [2024-12-09 04:16:17.177735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.721 [2024-12-09 04:16:17.177776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.721 [2024-12-09 04:16:17.177792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.721 [2024-12-09 04:16:17.178017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.721 [2024-12-09 04:16:17.178228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.178247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.178259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.178279] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.190467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.190963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.190989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.191021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.191298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.191520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.191540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.191553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.191565] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.203648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.204034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.204062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.204078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.204336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.204559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.204579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.204593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.204606] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.216988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.217377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.217405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.217420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.217646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.217842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.217860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.217872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.217883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.230175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.230697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.230740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.230756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.231005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.231200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.231218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.231230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.231241] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.243325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.243734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.243776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.243797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.244039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.244234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.244253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.244265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.244301] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.256454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.256945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.256986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.257002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.257256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.257497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.257516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.257528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.257540] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.269517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.269848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.269876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.269891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.270115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.270353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.722 [2024-12-09 04:16:17.270373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.722 [2024-12-09 04:16:17.270386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.722 [2024-12-09 04:16:17.270398] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.722 [2024-12-09 04:16:17.282577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.722 [2024-12-09 04:16:17.282889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.722 [2024-12-09 04:16:17.282930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.722 [2024-12-09 04:16:17.282945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.722 [2024-12-09 04:16:17.283163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.722 [2024-12-09 04:16:17.283421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.723 [2024-12-09 04:16:17.283449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.723 [2024-12-09 04:16:17.283462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.723 [2024-12-09 04:16:17.283474] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.982 [2024-12-09 04:16:17.296422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.982 [2024-12-09 04:16:17.296800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.982 [2024-12-09 04:16:17.296827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.982 [2024-12-09 04:16:17.296843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.982 [2024-12-09 04:16:17.297068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.982 [2024-12-09 04:16:17.297306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.982 [2024-12-09 04:16:17.297342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.982 [2024-12-09 04:16:17.297355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.982 [2024-12-09 04:16:17.297367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.982 [2024-12-09 04:16:17.309594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.982 [2024-12-09 04:16:17.310084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.982 [2024-12-09 04:16:17.310125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.982 [2024-12-09 04:16:17.310141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.982 [2024-12-09 04:16:17.310417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.982 [2024-12-09 04:16:17.310626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.982 [2024-12-09 04:16:17.310646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.982 [2024-12-09 04:16:17.310659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.982 [2024-12-09 04:16:17.310671] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.982 [2024-12-09 04:16:17.322672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.982 [2024-12-09 04:16:17.323164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.982 [2024-12-09 04:16:17.323205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.982 [2024-12-09 04:16:17.323222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.982 [2024-12-09 04:16:17.323474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.982 [2024-12-09 04:16:17.323705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.982 [2024-12-09 04:16:17.323724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.982 [2024-12-09 04:16:17.323736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.982 [2024-12-09 04:16:17.323752] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.982 [2024-12-09 04:16:17.335904] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.982 [2024-12-09 04:16:17.336277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.982 [2024-12-09 04:16:17.336306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.982 [2024-12-09 04:16:17.336322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.982 [2024-12-09 04:16:17.336567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.982 [2024-12-09 04:16:17.336778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.982 [2024-12-09 04:16:17.336797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.982 [2024-12-09 04:16:17.336809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.982 [2024-12-09 04:16:17.336820] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.982 [2024-12-09 04:16:17.349052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.982 [2024-12-09 04:16:17.349465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.982 [2024-12-09 04:16:17.349506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.982 [2024-12-09 04:16:17.349522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.982 [2024-12-09 04:16:17.349761] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.982 [2024-12-09 04:16:17.349956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.982 [2024-12-09 04:16:17.349975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.982 [2024-12-09 04:16:17.349986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.982 [2024-12-09 04:16:17.349997] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.982 [2024-12-09 04:16:17.362130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.982 [2024-12-09 04:16:17.362625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.982 [2024-12-09 04:16:17.362666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.982 [2024-12-09 04:16:17.362682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.982 [2024-12-09 04:16:17.362929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.982 [2024-12-09 04:16:17.363125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.982 [2024-12-09 04:16:17.363143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.363155] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.363166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.375331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.375742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.375785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.375801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.376056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.376266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.376309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.376322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.376334] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.388649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.389052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.389119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.389135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.389386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.389624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.389656] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.389669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.389681] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.402184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.402566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.402595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.402611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.402856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.403073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.403092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.403104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.403115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.415377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.415769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.415812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.415829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.416067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.416311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.416331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.416344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.416356] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.429000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.429343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.429373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.429389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.429621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.429840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.429859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.429871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.429883] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.442355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.442805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.442846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.442863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.443106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.443350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.443371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.443385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.443397] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.455752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.456130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.456167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.456201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.456430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.456681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.456708] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.456722] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.456736] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.469036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.469373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.469400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.469416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.469641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.469860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.469879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.469891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.469903] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.482606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.482997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.483025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.483041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.983 [2024-12-09 04:16:17.483294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.983 [2024-12-09 04:16:17.483496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.983 [2024-12-09 04:16:17.483515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.983 [2024-12-09 04:16:17.483527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.983 [2024-12-09 04:16:17.483538] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.983 [2024-12-09 04:16:17.495971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.983 [2024-12-09 04:16:17.496363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.983 [2024-12-09 04:16:17.496393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.983 [2024-12-09 04:16:17.496409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.984 [2024-12-09 04:16:17.496640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.984 [2024-12-09 04:16:17.496851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.984 [2024-12-09 04:16:17.496870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.984 [2024-12-09 04:16:17.496882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.984 [2024-12-09 04:16:17.496898] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.984 [2024-12-09 04:16:17.509345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.984 [2024-12-09 04:16:17.509759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.984 [2024-12-09 04:16:17.509802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.984 [2024-12-09 04:16:17.509819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.984 [2024-12-09 04:16:17.510088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.984 [2024-12-09 04:16:17.510311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.984 [2024-12-09 04:16:17.510348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.984 [2024-12-09 04:16:17.510361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.984 [2024-12-09 04:16:17.510373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.984 [2024-12-09 04:16:17.522670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.984 [2024-12-09 04:16:17.523041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.984 [2024-12-09 04:16:17.523069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.984 [2024-12-09 04:16:17.523085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.984 [2024-12-09 04:16:17.523340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.984 [2024-12-09 04:16:17.523548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.984 [2024-12-09 04:16:17.523567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.984 [2024-12-09 04:16:17.523580] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.984 [2024-12-09 04:16:17.523607] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.984 [2024-12-09 04:16:17.535937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.984 [2024-12-09 04:16:17.536276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.984 [2024-12-09 04:16:17.536304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.984 [2024-12-09 04:16:17.536320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.984 [2024-12-09 04:16:17.536545] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.984 [2024-12-09 04:16:17.536756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.984 [2024-12-09 04:16:17.536774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.984 [2024-12-09 04:16:17.536786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.984 [2024-12-09 04:16:17.536797] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:48.984 [2024-12-09 04:16:17.548968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:48.984 [2024-12-09 04:16:17.549368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:48.984 [2024-12-09 04:16:17.549396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:48.984 [2024-12-09 04:16:17.549412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:48.984 [2024-12-09 04:16:17.549637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:48.984 [2024-12-09 04:16:17.549866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:48.984 [2024-12-09 04:16:17.549884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:48.984 [2024-12-09 04:16:17.549897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:48.984 [2024-12-09 04:16:17.549908] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.243 [2024-12-09 04:16:17.562297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.244 [2024-12-09 04:16:17.562734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-12-09 04:16:17.562761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.244 [2024-12-09 04:16:17.562791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.244 [2024-12-09 04:16:17.563024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.244 [2024-12-09 04:16:17.563261] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.244 [2024-12-09 04:16:17.563305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.244 [2024-12-09 04:16:17.563319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.244 [2024-12-09 04:16:17.563331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.244 [2024-12-09 04:16:17.575873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.244 [2024-12-09 04:16:17.576186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-12-09 04:16:17.576212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.244 [2024-12-09 04:16:17.576227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.244 [2024-12-09 04:16:17.576482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.244 [2024-12-09 04:16:17.576719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.244 [2024-12-09 04:16:17.576737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.244 [2024-12-09 04:16:17.576750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.244 [2024-12-09 04:16:17.576761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.244 [2024-12-09 04:16:17.589028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244 [2024-12-09 04:16:17.589406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244 [2024-12-09 04:16:17.589448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244 [2024-12-09 04:16:17.589463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244 [2024-12-09 04:16:17.589719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244 [2024-12-09 04:16:17.589930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244 [2024-12-09 04:16:17.589948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244 [2024-12-09 04:16:17.589960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244 [2024-12-09 04:16:17.589972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244 [2024-12-09 04:16:17.602243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244 [2024-12-09 04:16:17.602581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244 [2024-12-09 04:16:17.602608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244 [2024-12-09 04:16:17.602624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244 [2024-12-09 04:16:17.602850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244 [2024-12-09 04:16:17.603062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244 [2024-12-09 04:16:17.603080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244 [2024-12-09 04:16:17.603092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244 [2024-12-09 04:16:17.603103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244 [2024-12-09 04:16:17.615326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244 [2024-12-09 04:16:17.615660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244 [2024-12-09 04:16:17.615687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244 [2024-12-09 04:16:17.615703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244 [2024-12-09 04:16:17.615928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244 [2024-12-09 04:16:17.616141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244 [2024-12-09 04:16:17.616160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244 [2024-12-09 04:16:17.616172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244 [2024-12-09 04:16:17.616183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244 [2024-12-09 04:16:17.628470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244 [2024-12-09 04:16:17.628874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244 [2024-12-09 04:16:17.628916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244 [2024-12-09 04:16:17.628933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244 [2024-12-09 04:16:17.629164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244 [2024-12-09 04:16:17.629426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244 [2024-12-09 04:16:17.629453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244 [2024-12-09 04:16:17.629467] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244 [2024-12-09 04:16:17.629479] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244 [2024-12-09 04:16:17.641626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244 [2024-12-09 04:16:17.641986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244 [2024-12-09 04:16:17.642028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244 [2024-12-09 04:16:17.642043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244 [2024-12-09 04:16:17.642297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244 [2024-12-09 04:16:17.642513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244 [2024-12-09 04:16:17.642532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244 [2024-12-09 04:16:17.642544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244 [2024-12-09 04:16:17.642556] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244 [2024-12-09 04:16:17.654662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.244 [2024-12-09 04:16:17.655028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.244 [2024-12-09 04:16:17.655070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.244 [2024-12-09 04:16:17.655086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.244 [2024-12-09 04:16:17.655370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.244 [2024-12-09 04:16:17.655607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.244 [2024-12-09 04:16:17.655628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.244 [2024-12-09 04:16:17.655658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.244 [2024-12-09 04:16:17.655670] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.244 [2024-12-09 04:16:17.667777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.244 [2024-12-09 04:16:17.668175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-12-09 04:16:17.668242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.244 [2024-12-09 04:16:17.668257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.244 [2024-12-09 04:16:17.668516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.244 [2024-12-09 04:16:17.668746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.244 [2024-12-09 04:16:17.668765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.244 [2024-12-09 04:16:17.668777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.244 [2024-12-09 04:16:17.668792] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.244 [2024-12-09 04:16:17.680841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.244 [2024-12-09 04:16:17.681138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.244 [2024-12-09 04:16:17.681164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.244 [2024-12-09 04:16:17.681179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.244 [2024-12-09 04:16:17.681420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.244 [2024-12-09 04:16:17.681655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.244 [2024-12-09 04:16:17.681674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.681686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.681698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.694042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.694434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.694476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.694492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.694743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.694938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.694957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.694969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.694980] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.707206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.707565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.707593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.707609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.707840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.708097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.708118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.708132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.708145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.720711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.721146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.721189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.721206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.721461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.721678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.721696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.721708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.721720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.733876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.734236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.734263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.734304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.734549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.734777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.734796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.734808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.734819] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.747043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.747456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.747484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.747500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.747746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.747942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.747960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.747972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.747983] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.760104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.760537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.760564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.760580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.760822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.761034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.761053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.761065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.761075] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.773186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.773522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.773548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.773563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.773775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.773985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.774003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.774015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.774027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.786359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.786721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.786748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.786764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.786992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.787204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.787223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.787234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.787245] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.799416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.799809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.799837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.799853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.800098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.800336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.245 [2024-12-09 04:16:17.800361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.245 [2024-12-09 04:16:17.800374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.245 [2024-12-09 04:16:17.800386] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.245 [2024-12-09 04:16:17.812516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.245 [2024-12-09 04:16:17.812967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.245 [2024-12-09 04:16:17.813009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.245 [2024-12-09 04:16:17.813025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.245 [2024-12-09 04:16:17.813262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.245 [2024-12-09 04:16:17.813503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.246 [2024-12-09 04:16:17.813522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.246 [2024-12-09 04:16:17.813535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.246 [2024-12-09 04:16:17.813546] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.504 [2024-12-09 04:16:17.826182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.504 [2024-12-09 04:16:17.826702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.504 [2024-12-09 04:16:17.826744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.504 [2024-12-09 04:16:17.826761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.504 [2024-12-09 04:16:17.827010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.504 [2024-12-09 04:16:17.827205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.504 [2024-12-09 04:16:17.827224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.504 [2024-12-09 04:16:17.827236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.504 [2024-12-09 04:16:17.827247] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.504 [2024-12-09 04:16:17.839255] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.504 [2024-12-09 04:16:17.839600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.504 [2024-12-09 04:16:17.839627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.504 [2024-12-09 04:16:17.839642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.504 [2024-12-09 04:16:17.839845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.504 [2024-12-09 04:16:17.840057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.504 [2024-12-09 04:16:17.840075] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.504 [2024-12-09 04:16:17.840087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.504 [2024-12-09 04:16:17.840103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.504 4240.20 IOPS, 16.56 MiB/s [2024-12-09T03:16:18.080Z] [2024-12-09 04:16:17.853635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505 [2024-12-09 04:16:17.853938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505 [2024-12-09 04:16:17.853979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505 [2024-12-09 04:16:17.853994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505 [2024-12-09 04:16:17.854213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505 [2024-12-09 04:16:17.854452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505 [2024-12-09 04:16:17.854472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505 [2024-12-09 04:16:17.854484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505 [2024-12-09 04:16:17.854495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505 [2024-12-09 04:16:17.866856] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505 [2024-12-09 04:16:17.867224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505 [2024-12-09 04:16:17.867266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505 [2024-12-09 04:16:17.867290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505 [2024-12-09 04:16:17.867537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505 [2024-12-09 04:16:17.867769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505 [2024-12-09 04:16:17.867787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505 [2024-12-09 04:16:17.867799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505 [2024-12-09 04:16:17.867810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505 [2024-12-09 04:16:17.880069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505 [2024-12-09 04:16:17.880490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505 [2024-12-09 04:16:17.880517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505 [2024-12-09 04:16:17.880532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505 [2024-12-09 04:16:17.880774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505 [2024-12-09 04:16:17.880984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505 [2024-12-09 04:16:17.881003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505 [2024-12-09 04:16:17.881015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505 [2024-12-09 04:16:17.881026] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505 [2024-12-09 04:16:17.893233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505 [2024-12-09 04:16:17.893611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505 [2024-12-09 04:16:17.893654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505 [2024-12-09 04:16:17.893669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505 [2024-12-09 04:16:17.893939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505 [2024-12-09 04:16:17.894135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505 [2024-12-09 04:16:17.894153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505 [2024-12-09 04:16:17.894166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505 [2024-12-09 04:16:17.894177] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.505 [2024-12-09 04:16:17.906395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.505 [2024-12-09 04:16:17.906769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.505 [2024-12-09 04:16:17.906796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.505 [2024-12-09 04:16:17.906812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.505 [2024-12-09 04:16:17.907049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.505 [2024-12-09 04:16:17.907260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.505 [2024-12-09 04:16:17.907302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.505 [2024-12-09 04:16:17.907317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.505 [2024-12-09 04:16:17.907329] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.505 [2024-12-09 04:16:17.920003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.505 [2024-12-09 04:16:17.920388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.505 [2024-12-09 04:16:17.920417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.505 [2024-12-09 04:16:17.920433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.505 [2024-12-09 04:16:17.920678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.505 [2024-12-09 04:16:17.920907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.505 [2024-12-09 04:16:17.920925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.505 [2024-12-09 04:16:17.920937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.505 [2024-12-09 04:16:17.920948] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.505 [2024-12-09 04:16:17.933360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.505 [2024-12-09 04:16:17.933745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.505 [2024-12-09 04:16:17.933787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.505 [2024-12-09 04:16:17.933810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.505 [2024-12-09 04:16:17.934057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.505 [2024-12-09 04:16:17.934268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.505 [2024-12-09 04:16:17.934297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.505 [2024-12-09 04:16:17.934310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.505 [2024-12-09 04:16:17.934338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.505 [2024-12-09 04:16:17.946989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.505 [2024-12-09 04:16:17.947387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.505 [2024-12-09 04:16:17.947433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.505 [2024-12-09 04:16:17.947450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.505 [2024-12-09 04:16:17.947697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.505 [2024-12-09 04:16:17.947917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.505 [2024-12-09 04:16:17.947938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.505 [2024-12-09 04:16:17.947951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.505 [2024-12-09 04:16:17.947962] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.505 [2024-12-09 04:16:17.960290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.505 [2024-12-09 04:16:17.960708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.505 [2024-12-09 04:16:17.960735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.505 [2024-12-09 04:16:17.960766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.505 [2024-12-09 04:16:17.961012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.505 [2024-12-09 04:16:17.961282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.505 [2024-12-09 04:16:17.961304] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.505 [2024-12-09 04:16:17.961318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.505 [2024-12-09 04:16:17.961331] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.505 [2024-12-09 04:16:17.973613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.505 [2024-12-09 04:16:17.973960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.505 [2024-12-09 04:16:17.973997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.505 [2024-12-09 04:16:17.974013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.505 [2024-12-09 04:16:17.974237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.505 [2024-12-09 04:16:17.974469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.505 [2024-12-09 04:16:17.974489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.505 [2024-12-09 04:16:17.974501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.505 [2024-12-09 04:16:17.974513] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:17.986891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:17.987324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:17.987354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:17.987369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:17.987602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:17.987815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:17.987834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:17.987846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:17.987857] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:18.000187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:18.000592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:18.000643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:18.000658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:18.000893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:18.001122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:18.001141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:18.001154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:18.001165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:18.013485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:18.013874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:18.013916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:18.013932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:18.014186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:18.014428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:18.014448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:18.014461] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:18.014477] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:18.026789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:18.027220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:18.027262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:18.027290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:18.027523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:18.027770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:18.027789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:18.027802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:18.027814] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:18.040085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:18.040526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:18.040578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:18.040594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:18.040858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:18.041053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:18.041072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:18.041084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:18.041095] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:18.053374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:18.053867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:18.053920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:18.053936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:18.054201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:18.054427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:18.054447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:18.054460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:18.054471] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.506 [2024-12-09 04:16:18.066678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.506 [2024-12-09 04:16:18.067082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.506 [2024-12-09 04:16:18.067108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.506 [2024-12-09 04:16:18.067123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.506 [2024-12-09 04:16:18.067382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.506 [2024-12-09 04:16:18.067603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.506 [2024-12-09 04:16:18.067622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.506 [2024-12-09 04:16:18.067649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.506 [2024-12-09 04:16:18.067660] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765 [2024-12-09 04:16:18.080437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765 [2024-12-09 04:16:18.080834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765 [2024-12-09 04:16:18.080886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765 [2024-12-09 04:16:18.080921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765 [2024-12-09 04:16:18.081176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765 [2024-12-09 04:16:18.081401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765 [2024-12-09 04:16:18.081429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765 [2024-12-09 04:16:18.081441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765 [2024-12-09 04:16:18.081453] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765 [2024-12-09 04:16:18.093595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765 [2024-12-09 04:16:18.093962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765 [2024-12-09 04:16:18.094005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765 [2024-12-09 04:16:18.094021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765 [2024-12-09 04:16:18.094300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765 [2024-12-09 04:16:18.094525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765 [2024-12-09 04:16:18.094545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765 [2024-12-09 04:16:18.094558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765 [2024-12-09 04:16:18.094570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765 [2024-12-09 04:16:18.106645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765 [2024-12-09 04:16:18.107021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765 [2024-12-09 04:16:18.107064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765 [2024-12-09 04:16:18.107086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765 [2024-12-09 04:16:18.107369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765 [2024-12-09 04:16:18.107578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765 [2024-12-09 04:16:18.107597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765 [2024-12-09 04:16:18.107610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765 [2024-12-09 04:16:18.107622] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765 [2024-12-09 04:16:18.119842] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765 [2024-12-09 04:16:18.120258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.765 [2024-12-09 04:16:18.120316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.765 [2024-12-09 04:16:18.120331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.765 [2024-12-09 04:16:18.120578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.765 [2024-12-09 04:16:18.120774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.765 [2024-12-09 04:16:18.120792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.765 [2024-12-09 04:16:18.120804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.765 [2024-12-09 04:16:18.120815] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.765 [2024-12-09 04:16:18.132987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.765 [2024-12-09 04:16:18.133383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.133411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.133427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.133652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.133881] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.133900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.133912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.133923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.146134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.146570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.146612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.146628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.146871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.147086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.147105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.147117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.147128] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.159490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.159851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.159879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.159895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.160127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.160392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.160413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.160426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.160438] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.172514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.172914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.172941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.172956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.173182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.173439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.173459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.173471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.173483] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.185587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.186078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.186120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.186137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.186410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.186650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.186669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.186680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.186696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.198683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.199177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.199203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.199233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.199460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.199702] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.199720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.199732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.199744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.211854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.212258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.212308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.212324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.212556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.212819] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.212840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.212854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.212867] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.225132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.225652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.225680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.225711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.225963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.226158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.226176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.226188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.226199] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.238381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:49.766 [2024-12-09 04:16:18.238834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:49.766 [2024-12-09 04:16:18.238862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420
00:25:49.766 [2024-12-09 04:16:18.238892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set
00:25:49.766 [2024-12-09 04:16:18.239135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor
00:25:49.766 [2024-12-09 04:16:18.239391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:49.766 [2024-12-09 04:16:18.239411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:49.766 [2024-12-09 04:16:18.239424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:49.766 [2024-12-09 04:16:18.239436] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:49.766 [2024-12-09 04:16:18.251659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.766 [2024-12-09 04:16:18.252071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.766 [2024-12-09 04:16:18.252097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.766 [2024-12-09 04:16:18.252127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.766 [2024-12-09 04:16:18.252376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.766 [2024-12-09 04:16:18.252578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.766 [2024-12-09 04:16:18.252611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.766 [2024-12-09 04:16:18.252624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.252635] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.767 [2024-12-09 04:16:18.264756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.767 [2024-12-09 04:16:18.265119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.767 [2024-12-09 04:16:18.265145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.767 [2024-12-09 04:16:18.265160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.767 [2024-12-09 04:16:18.265407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.767 [2024-12-09 04:16:18.265625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.767 [2024-12-09 04:16:18.265644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.767 [2024-12-09 04:16:18.265671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.265683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.767 [2024-12-09 04:16:18.277996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.767 [2024-12-09 04:16:18.278392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.767 [2024-12-09 04:16:18.278420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.767 [2024-12-09 04:16:18.278441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.767 [2024-12-09 04:16:18.278690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.767 [2024-12-09 04:16:18.278903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.767 [2024-12-09 04:16:18.278921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.767 [2024-12-09 04:16:18.278933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.278944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.767 [2024-12-09 04:16:18.291078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.767 [2024-12-09 04:16:18.291466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.767 [2024-12-09 04:16:18.291508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.767 [2024-12-09 04:16:18.291524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.767 [2024-12-09 04:16:18.291750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.767 [2024-12-09 04:16:18.291962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.767 [2024-12-09 04:16:18.291980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.767 [2024-12-09 04:16:18.291992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.292004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.767 [2024-12-09 04:16:18.304137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.767 [2024-12-09 04:16:18.304511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.767 [2024-12-09 04:16:18.304554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.767 [2024-12-09 04:16:18.304569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.767 [2024-12-09 04:16:18.304825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.767 [2024-12-09 04:16:18.305036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.767 [2024-12-09 04:16:18.305054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.767 [2024-12-09 04:16:18.305066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.305077] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.767 [2024-12-09 04:16:18.317301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.767 [2024-12-09 04:16:18.317800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.767 [2024-12-09 04:16:18.317841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.767 [2024-12-09 04:16:18.317857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.767 [2024-12-09 04:16:18.318103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.767 [2024-12-09 04:16:18.318325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.767 [2024-12-09 04:16:18.318350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.767 [2024-12-09 04:16:18.318363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.318374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:49.767 [2024-12-09 04:16:18.330409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:49.767 [2024-12-09 04:16:18.330899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:49.767 [2024-12-09 04:16:18.330925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:49.767 [2024-12-09 04:16:18.330956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:49.767 [2024-12-09 04:16:18.331202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:49.767 [2024-12-09 04:16:18.331432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:49.767 [2024-12-09 04:16:18.331453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:49.767 [2024-12-09 04:16:18.331466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:49.767 [2024-12-09 04:16:18.331478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.343712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.344195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.344248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.344265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.344492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.344743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.344761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.344774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.344786] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.356808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.357302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.357329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.357360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.357612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.357823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.357841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.357853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.357868] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.369884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.370248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.370296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.370313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.370558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.370790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.370808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.370820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.370831] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.383073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.383536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.383583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.383599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.383843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.384038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.384056] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.384069] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.384080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.396204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.396646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.396674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.396690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.396933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.397129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.397147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.397159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.397170] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.409234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.409583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.409610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.409625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.409851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.410047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.410065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.410077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.410089] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 [2024-12-09 04:16:18.422316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 [2024-12-09 04:16:18.422743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.422802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.422817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 [2024-12-09 04:16:18.423055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 [2024-12-09 04:16:18.423265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.026 [2024-12-09 04:16:18.423369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.026 [2024-12-09 04:16:18.423383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.026 [2024-12-09 04:16:18.423395] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 341089 Killed "${NVMF_APP[@]}" "$@" 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.026 [2024-12-09 04:16:18.435902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=342045 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 342045 00:25:50.026 [2024-12-09 04:16:18.436311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.026 [2024-12-09 04:16:18.436341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.026 [2024-12-09 04:16:18.436357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 342045 ']' 00:25:50.026 [2024-12-09 04:16:18.436588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.026 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.027 [2024-12-09 04:16:18.436813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.436833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.436849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.436862] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:50.027 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:50.027 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:50.027 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.027 [2024-12-09 04:16:18.449245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.449676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.449718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.449733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.449982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.450193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.450212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.450224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.450235] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.462526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.462997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.463050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.463066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.463326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.463549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.463570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.463583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.463596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.475859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.476299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.476327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.476348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.476583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.476799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.476818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.476831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.476843] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.027 [2024-12-09 04:16:18.488162] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 
00:25:50.027 [2024-12-09 04:16:18.488238] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.027 [2024-12-09 04:16:18.489237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.489612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.489640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.489656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.489881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.490093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.490112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.490124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.490136] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.502918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.503408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.503438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.503454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.503708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.503911] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.503930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.503942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.503954] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.516500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.516960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.517007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.517023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.517307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.517522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.517543] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.517557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.517576] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.529876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.530251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.530286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.530304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.530520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.530765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.530784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.530796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.530808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.543342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.543744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.543787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.543802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.544060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.027 [2024-12-09 04:16:18.544314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.027 [2024-12-09 04:16:18.544353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.027 [2024-12-09 04:16:18.544367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.027 [2024-12-09 04:16:18.544380] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.027 [2024-12-09 04:16:18.556780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.027 [2024-12-09 04:16:18.557129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.027 [2024-12-09 04:16:18.557157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.027 [2024-12-09 04:16:18.557174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.027 [2024-12-09 04:16:18.557437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.028 [2024-12-09 04:16:18.557659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-09 04:16:18.557678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-09 04:16:18.557690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-09 04:16:18.557702] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-09 04:16:18.563542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:50.028 [2024-12-09 04:16:18.570160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-09 04:16:18.570595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-09 04:16:18.570628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-09 04:16:18.570646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.028 [2024-12-09 04:16:18.570894] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.028 [2024-12-09 04:16:18.571098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-09 04:16:18.571117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-09 04:16:18.571131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-09 04:16:18.571145] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-09 04:16:18.583584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-09 04:16:18.584129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-09 04:16:18.584179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-09 04:16:18.584198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.028 [2024-12-09 04:16:18.584468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.028 [2024-12-09 04:16:18.584713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-09 04:16:18.584732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-09 04:16:18.584748] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-09 04:16:18.584762] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.028 [2024-12-09 04:16:18.597101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.028 [2024-12-09 04:16:18.597467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.028 [2024-12-09 04:16:18.597495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.028 [2024-12-09 04:16:18.597512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.028 [2024-12-09 04:16:18.597731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.028 [2024-12-09 04:16:18.597972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.028 [2024-12-09 04:16:18.598002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.028 [2024-12-09 04:16:18.598015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.028 [2024-12-09 04:16:18.598027] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.285 [2024-12-09 04:16:18.610411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.285 [2024-12-09 04:16:18.610749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.285 [2024-12-09 04:16:18.610776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.285 [2024-12-09 04:16:18.610792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.285 [2024-12-09 04:16:18.611019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.285 [2024-12-09 04:16:18.611236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.285 [2024-12-09 04:16:18.611255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.285 [2024-12-09 04:16:18.611268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.285 [2024-12-09 04:16:18.611307] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:50.285 [2024-12-09 04:16:18.622498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.285 [2024-12-09 04:16:18.622531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.285 [2024-12-09 04:16:18.622559] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.285 [2024-12-09 04:16:18.622571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:50.285 [2024-12-09 04:16:18.622592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.285 [2024-12-09 04:16:18.623672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.285 [2024-12-09 04:16:18.624100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.285 [2024-12-09 04:16:18.624070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.285 [2024-12-09 04:16:18.624130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.285 [2024-12-09 04:16:18.624151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.285 [2024-12-09 04:16:18.624099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.285 [2024-12-09 04:16:18.624103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.285 [2024-12-09 04:16:18.624380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.285 [2024-12-09 04:16:18.624618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.285 [2024-12-09 04:16:18.624638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.285 [2024-12-09 04:16:18.624652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.285 [2024-12-09 04:16:18.624664] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.285 [2024-12-09 04:16:18.637228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.285 [2024-12-09 04:16:18.637792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.285 [2024-12-09 04:16:18.637841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.637861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.638102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.638337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.638359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.638375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.638390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.650897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.651444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.651484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.651504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.651739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.651964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.651986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.652003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.652019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.664606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.665132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.665171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.665191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.665428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.665668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.665689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.665704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.665720] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.678257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.678770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.678807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.678827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.679075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.679313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.679335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.679350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.679365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.691814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.692404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.692449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.692469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.692713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.692932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.692953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.692968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.692984] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.705449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.706011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.706051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.706071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.706323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.706541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.706563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.706578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.706594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.718948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.719292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.719320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.719337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.719555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.719777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.719808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.719822] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.719835] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.732642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.732967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.732996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.733013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.733230] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.733461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.733483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.733497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.733509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 [2024-12-09 04:16:18.746435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.746819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.746848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.746864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.747096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.747341] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.747363] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.747377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.747390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.759951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.760321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.760350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.760366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.760599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.760820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.760841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.760854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.760866] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 [2024-12-09 04:16:18.773629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.773787] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.286 [2024-12-09 04:16:18.774003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.774031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.774047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.774264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.774497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.774518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.774532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.774544] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 [2024-12-09 04:16:18.787361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.787792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.787822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.787840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.788075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.788321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.788343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.788358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.788373] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.800981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.801368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.801397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.801413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.801646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.801861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.801881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.801894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.801906] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 [2024-12-09 04:16:18.814655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.815066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.815099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.815116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.815348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.815597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.815618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.815633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.815647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 Malloc0 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 [2024-12-09 04:16:18.828194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 [2024-12-09 04:16:18.828570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:50.286 [2024-12-09 04:16:18.828599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e54660 with addr=10.0.0.2, port=4420 00:25:50.286 [2024-12-09 04:16:18.828616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e54660 is same with the state(6) to be set 00:25:50.286 [2024-12-09 04:16:18.828834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e54660 (9): Bad file descriptor 00:25:50.286 [2024-12-09 04:16:18.829081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:50.286 [2024-12-09 04:16:18.829103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:50.286 [2024-12-09 04:16:18.829117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:50.286 [2024-12-09 04:16:18.829130] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:50.286 [2024-12-09 04:16:18.841382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.286 [2024-12-09 04:16:18.841925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:50.286 04:16:18 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 341384 00:25:50.542 3533.50 IOPS, 13.80 MiB/s [2024-12-09T03:16:19.118Z] [2024-12-09 04:16:18.992728] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:52.428 4048.57 IOPS, 15.81 MiB/s [2024-12-09T03:16:21.933Z] 4567.50 IOPS, 17.84 MiB/s [2024-12-09T03:16:23.304Z] 4986.00 IOPS, 19.48 MiB/s [2024-12-09T03:16:24.236Z] 5309.40 IOPS, 20.74 MiB/s [2024-12-09T03:16:25.169Z] 5570.64 IOPS, 21.76 MiB/s [2024-12-09T03:16:26.100Z] 5792.83 IOPS, 22.63 MiB/s [2024-12-09T03:16:27.032Z] 5987.85 IOPS, 23.39 MiB/s [2024-12-09T03:16:27.964Z] 6153.64 IOPS, 24.04 MiB/s [2024-12-09T03:16:27.964Z] 6301.80 IOPS, 24.62 MiB/s 00:25:59.388 Latency(us) 00:25:59.388 [2024-12-09T03:16:27.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:59.388 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:59.388 Verification LBA range: start 0x0 length 0x4000 00:25:59.388 Nvme1n1 : 15.01 6300.85 24.61 10338.14 0.00 7668.20 807.06 20874.43 00:25:59.388 [2024-12-09T03:16:27.964Z] =================================================================================================================== 00:25:59.388 [2024-12-09T03:16:27.964Z] Total : 6300.85 24.61 10338.14 0.00 7668.20 807.06 20874.43 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:59.646 rmmod nvme_tcp 00:25:59.646 rmmod nvme_fabrics 00:25:59.646 rmmod nvme_keyring 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 342045 ']' 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 342045 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 342045 ']' 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 342045 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:59.646 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 342045 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 342045' 00:25:59.905 killing process with pid 342045 00:25:59.905 04:16:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 342045 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 342045 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.905 04:16:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:02.443 00:26:02.443 real 0m22.842s 00:26:02.443 user 0m59.761s 00:26:02.443 sys 0m4.873s 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:02.443 ************************************ 00:26:02.443 END TEST nvmf_bdevperf 00:26:02.443 
************************************ 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:02.443 ************************************ 00:26:02.443 START TEST nvmf_target_disconnect 00:26:02.443 ************************************ 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:02.443 * Looking for test storage... 00:26:02.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:02.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.443 --rc genhtml_branch_coverage=1 00:26:02.443 --rc genhtml_function_coverage=1 00:26:02.443 --rc genhtml_legend=1 00:26:02.443 --rc geninfo_all_blocks=1 00:26:02.443 --rc geninfo_unexecuted_blocks=1 
00:26:02.443 00:26:02.443 ' 00:26:02.443 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:02.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.443 --rc genhtml_branch_coverage=1 00:26:02.443 --rc genhtml_function_coverage=1 00:26:02.443 --rc genhtml_legend=1 00:26:02.443 --rc geninfo_all_blocks=1 00:26:02.444 --rc geninfo_unexecuted_blocks=1 00:26:02.444 00:26:02.444 ' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:02.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.444 --rc genhtml_branch_coverage=1 00:26:02.444 --rc genhtml_function_coverage=1 00:26:02.444 --rc genhtml_legend=1 00:26:02.444 --rc geninfo_all_blocks=1 00:26:02.444 --rc geninfo_unexecuted_blocks=1 00:26:02.444 00:26:02.444 ' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:02.444 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:02.444 --rc genhtml_branch_coverage=1 00:26:02.444 --rc genhtml_function_coverage=1 00:26:02.444 --rc genhtml_legend=1 00:26:02.444 --rc geninfo_all_blocks=1 00:26:02.444 --rc geninfo_unexecuted_blocks=1 00:26:02.444 00:26:02.444 ' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:02.444 04:16:30 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:02.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:26:02.444 04:16:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:04.349 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:04.349 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:26:04.349 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:04.349 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:04.349 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:26:04.350 
04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:04.350 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:04.350 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:04.350 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:04.350 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:04.350 04:16:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:04.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:04.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:26:04.350 00:26:04.350 --- 10.0.0.2 ping statistics --- 00:26:04.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.350 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:04.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:04.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:26:04.350 00:26:04.350 --- 10.0.0.1 ping statistics --- 00:26:04.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:04.350 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:04.350 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:04.351 04:16:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:04.351 ************************************ 00:26:04.351 START TEST nvmf_target_disconnect_tc1 00:26:04.351 ************************************ 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:04.351 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:04.623 [2024-12-09 04:16:32.957860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.623 [2024-12-09 04:16:32.957923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x799f40 with 
addr=10.0.0.2, port=4420 00:26:04.623 [2024-12-09 04:16:32.957957] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:04.623 [2024-12-09 04:16:32.957978] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:04.623 [2024-12-09 04:16:32.957993] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:04.623 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:04.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:04.623 Initializing NVMe Controllers 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:04.623 00:26:04.623 real 0m0.095s 00:26:04.623 user 0m0.039s 00:26:04.623 sys 0m0.056s 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:04.623 ************************************ 00:26:04.623 END TEST nvmf_target_disconnect_tc1 00:26:04.623 ************************************ 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:04.623 04:16:32 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:04.623 04:16:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:04.623 ************************************ 00:26:04.623 START TEST nvmf_target_disconnect_tc2 00:26:04.623 ************************************ 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=345207 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 345207 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 345207 ']' 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:04.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:04.623 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.623 [2024-12-09 04:16:33.076611] Starting SPDK v25.01-pre git sha1 c4269c6e2 / DPDK 24.03.0 initialization... 00:26:04.623 [2024-12-09 04:16:33.076701] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:04.623 [2024-12-09 04:16:33.148022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:04.881 [2024-12-09 04:16:33.204052] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:04.881 [2024-12-09 04:16:33.204109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:04.881 [2024-12-09 04:16:33.204132] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:04.881 [2024-12-09 04:16:33.204142] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:04.881 [2024-12-09 04:16:33.204151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:04.881 [2024-12-09 04:16:33.205719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:04.881 [2024-12-09 04:16:33.205783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:04.881 [2024-12-09 04:16:33.205888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:04.881 [2024-12-09 04:16:33.205897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 Malloc0 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.881 04:16:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 [2024-12-09 04:16:33.396581] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.881 04:16:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 [2024-12-09 04:16:33.424886] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=345231 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:04.881 04:16:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:07.424 04:16:35 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 345207 00:26:07.424 04:16:35 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:07.424 Read completed with error (sct=0, sc=8) 00:26:07.424 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 
Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 [2024-12-09 04:16:35.451082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O 
failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 
00:26:07.425 [2024-12-09 04:16:35.451470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 
starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Write completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 Read completed with error (sct=0, sc=8) 00:26:07.425 starting I/O failed 00:26:07.425 [2024-12-09 04:16:35.451809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:07.425 [2024-12-09 04:16:35.452005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.425 [2024-12-09 04:16:35.452057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.425 qpair failed and we were unable to recover it. 00:26:07.425 [2024-12-09 04:16:35.452198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.425 [2024-12-09 04:16:35.452233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.425 qpair failed and we were unable to recover it. 
00:26:07.425 [2024-12-09 04:16:35.452371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.425 [2024-12-09 04:16:35.452399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.425 qpair failed and we were unable to recover it. 00:26:07.425 [2024-12-09 04:16:35.452482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.452509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.452638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.452667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.452792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.452820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.452920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.452947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.453060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.453238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.453369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.453494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.453619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.453805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.453924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.453950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.454089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.454130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.454231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.454259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.454404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.454431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.454532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.454559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.454714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.454740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.454847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.454873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.455051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.455181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.455318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.455456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.455606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.455721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.455863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.455901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.455987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.456125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.456296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.456412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.456580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.456755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.456870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.456897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.457017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.457045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.457156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.457182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.457262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.457294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 
00:26:07.426 [2024-12-09 04:16:35.457411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.457437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.457521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.457547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.457662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.426 [2024-12-09 04:16:35.457688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.426 qpair failed and we were unable to recover it. 00:26:07.426 [2024-12-09 04:16:35.457776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.457802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.457944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.457971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.458080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.458225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.458396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.458525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.458673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.458813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.458931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.458958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.459063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.459174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.459321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.459462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.459647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.459784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.459954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.459981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.460089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.460203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.460324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.460431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.460586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.460694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.460831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.460858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.460997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.461178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.461315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.461436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.461581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.461792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.461968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.461995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.462104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.462131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.462204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.462230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 
00:26:07.427 [2024-12-09 04:16:35.462355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.462382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.462507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.462533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.462621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.462647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.427 [2024-12-09 04:16:35.462740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.427 [2024-12-09 04:16:35.462765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.427 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.462881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.462910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 
00:26:07.428 [2024-12-09 04:16:35.463019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.463161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.463319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.463484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.463627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 
00:26:07.428 [2024-12-09 04:16:35.463750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.463895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.463922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 
00:26:07.428 [2024-12-09 04:16:35.464439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.464884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.464994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 
00:26:07.428 [2024-12-09 04:16:35.465142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.465294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.465443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.465554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.465718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 
00:26:07.428 [2024-12-09 04:16:35.465936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.465963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.466104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.466131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.466249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.466289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.466374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.466400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 00:26:07.428 [2024-12-09 04:16:35.466487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.428 [2024-12-09 04:16:35.466513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.428 qpair failed and we were unable to recover it. 
00:26:07.428 [2024-12-09 04:16:35.466625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.466653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.466779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.466808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.466949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.466975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.467909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.467987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.428 [2024-12-09 04:16:35.468014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.428 qpair failed and we were unable to recover it.
00:26:07.428 [2024-12-09 04:16:35.468102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.468268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.468397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.468551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.468713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.468854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.468965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.468991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.469889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.469916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.470956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.470983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.471907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.471992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.472018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.472103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.472128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.472204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.472229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.472386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.472412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.429 qpair failed and we were unable to recover it.
00:26:07.429 [2024-12-09 04:16:35.472496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.429 [2024-12-09 04:16:35.472522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.472643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.472671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Read completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 Write completed with error (sct=0, sc=8)
00:26:07.430 starting I/O failed
00:26:07.430 [2024-12-09 04:16:35.472978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:07.430 [2024-12-09 04:16:35.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.473069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.473155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.473181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.473298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.473325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.473412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.473438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.473589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.473615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.473838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.473890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.473995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.474144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.474300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.474460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.474584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.474692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.474870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.474917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.475889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.475914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.476036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.476063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.476149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.476177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.476293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.476321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.476438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.476465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.476574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.430 [2024-12-09 04:16:35.476601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.430 qpair failed and we were unable to recover it.
00:26:07.430 [2024-12-09 04:16:35.476688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.476714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.476792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.476817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.476931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.476958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.477097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.477219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.477369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.477486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.477622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.477801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.477981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.478145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.478304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.478451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.478594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.478762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.478928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.478954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.479072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.431 [2024-12-09 04:16:35.479097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.431 qpair failed and we were unable to recover it.
00:26:07.431 [2024-12-09 04:16:35.479180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.479302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.479441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.479546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.479679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 
00:26:07.431 [2024-12-09 04:16:35.479824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.479960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.479986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 
00:26:07.431 [2024-12-09 04:16:35.480513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.480954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.480979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 
00:26:07.431 [2024-12-09 04:16:35.481084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.481108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.481191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.481219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.481316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.481355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.481476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.481510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.431 [2024-12-09 04:16:35.481599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.481625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 
00:26:07.431 [2024-12-09 04:16:35.481711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.431 [2024-12-09 04:16:35.481738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.431 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.481862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.481888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.482083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.482144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.482253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.482284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.482399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.482425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.482515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.482541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.482735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.482788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.482998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.483134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.483240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.483382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.483499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.483605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.483742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.483851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.483960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.483985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.484075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.484263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.484410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.484534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.484671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.484806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.484909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.484935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.485047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.485191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.485331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.485480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.485599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.485705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.485844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.485870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.485984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.486096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.486210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.486363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.486503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 
00:26:07.432 [2024-12-09 04:16:35.486619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.486763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.432 qpair failed and we were unable to recover it. 00:26:07.432 [2024-12-09 04:16:35.486877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.432 [2024-12-09 04:16:35.486904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.487019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.487146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433 [2024-12-09 04:16:35.487298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.487438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.487578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.487744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.487891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.487917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433 [2024-12-09 04:16:35.488068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.488106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.488201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.488228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.488355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.488382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.488472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.488498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.488709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.488736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433 [2024-12-09 04:16:35.488951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.489112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.489261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.489421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.489541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433 [2024-12-09 04:16:35.489784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.489927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.489953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.490082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.490224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.490396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433 [2024-12-09 04:16:35.490511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.490628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.490771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.490886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.490914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 00:26:07.433 [2024-12-09 04:16:35.491030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.491057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433 [2024-12-09 04:16:35.491174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.433 [2024-12-09 04:16:35.491205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.433 qpair failed and we were unable to recover it. 
00:26:07.433-00:26:07.437 [2024-12-09 04:16:35.491349-04:16:35.507362] posix.c:1054:posix_sock_create / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: the same connect() failure (errno = 111, ECONNREFUSED) followed by "qpair failed and we were unable to recover it." repeats ~114 more times against addr=10.0.0.2, port=4420, cycling through tqpairs 0x7fe5b4000b90, 0x7fe5b8000b90, 0x7fe5c0000b90, and 0x1818fa0. 
00:26:07.437 [2024-12-09 04:16:35.507467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.507493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.507579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.507605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.507752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.507777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.507893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.507918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.508027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 
00:26:07.437 [2024-12-09 04:16:35.508157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.508324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.508464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.508570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.508699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 
00:26:07.437 [2024-12-09 04:16:35.508837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.508863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.508976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.509093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.509199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.509336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 
00:26:07.437 [2024-12-09 04:16:35.509498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.509640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.509779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.509892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.509917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.510051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 
00:26:07.437 [2024-12-09 04:16:35.510238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.510398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.510533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.510643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.510774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 
00:26:07.437 [2024-12-09 04:16:35.510885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.510913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.511030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.511056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.511180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.511218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.511355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.511382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.511494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.511519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 
00:26:07.437 [2024-12-09 04:16:35.511633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.511659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.511774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.437 [2024-12-09 04:16:35.511799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.437 qpair failed and we were unable to recover it. 00:26:07.437 [2024-12-09 04:16:35.511917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.511941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.512062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.512202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.512354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.512502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.512616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.512761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.512930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.512958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.513081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.513205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.513356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.513473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.513607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.513711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.513846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.513873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.514027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.514163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.514293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.514435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.514599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.514792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.514935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.514961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.515080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.515245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.515376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.515495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.515633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.515774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.515922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.515950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.516559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.516862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.516978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.517003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 00:26:07.438 [2024-12-09 04:16:35.517084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.438 [2024-12-09 04:16:35.517109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.438 qpair failed and we were unable to recover it. 
00:26:07.438 [2024-12-09 04:16:35.517215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.439 [2024-12-09 04:16:35.517240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.439 qpair failed and we were unable to recover it. 00:26:07.439 [2024-12-09 04:16:35.517371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.439 [2024-12-09 04:16:35.517408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.439 qpair failed and we were unable to recover it. 00:26:07.439 [2024-12-09 04:16:35.517527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.439 [2024-12-09 04:16:35.517554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.439 qpair failed and we were unable to recover it. 00:26:07.439 [2024-12-09 04:16:35.517702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.439 [2024-12-09 04:16:35.517730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.439 qpair failed and we were unable to recover it. 00:26:07.439 [2024-12-09 04:16:35.517809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.439 [2024-12-09 04:16:35.517833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.439 qpair failed and we were unable to recover it. 
00:26:07.439 [2024-12-09 04:16:35.517912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.517936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.518881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.518907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.519950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.519975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.520967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.520995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.521107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.521134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.521235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.439 [2024-12-09 04:16:35.521280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.439 qpair failed and we were unable to recover it.
00:26:07.439 [2024-12-09 04:16:35.521369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.521396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.521503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.521529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.521640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.521664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.521773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.521798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.521889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.521913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.522959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.522990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.523068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.523093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.523195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.523233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.523357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.523387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.523501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.523528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.523607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.523633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.523849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.523905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.524139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.524180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.524284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.524312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.524433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.524460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.524604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.524659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.524760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.524831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.524948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.524973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.525880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.525988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.526015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.440 [2024-12-09 04:16:35.526146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.440 [2024-12-09 04:16:35.526176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.440 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.526284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.526310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.526428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.526453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.526541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.526566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.526648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.526674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.526761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.526787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.526881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.526909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.527870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.527898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.528861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.528984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.529897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.529923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.441 qpair failed and we were unable to recover it.
00:26:07.441 [2024-12-09 04:16:35.530946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.441 [2024-12-09 04:16:35.530971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.531947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.531983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.532969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.532995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.533079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.533104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.533224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.533252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.533351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.533377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.533494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.533521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.533628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.533654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.533824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.442 [2024-12-09 04:16:35.533895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.442 qpair failed and we were unable to recover it.
00:26:07.442 [2024-12-09 04:16:35.534079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.534136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.534290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.534327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.534440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.534465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.534576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.534609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.534725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.534752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 
00:26:07.442 [2024-12-09 04:16:35.534875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.534937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.535080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.535191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.535308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.535423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 
00:26:07.442 [2024-12-09 04:16:35.535590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.535735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.535907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.535934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.536054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.536081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.536189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.536217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 
00:26:07.442 [2024-12-09 04:16:35.536440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.442 [2024-12-09 04:16:35.536482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.442 qpair failed and we were unable to recover it. 00:26:07.442 [2024-12-09 04:16:35.536605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.536634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.536759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.536788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.536928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.536956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.537097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.537226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.537395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.537527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.537694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.537836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.537957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.537984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.538097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.538124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.538205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.538231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.538311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.538338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.538454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.538480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.538596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.538628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.538827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.538854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.539068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.539258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.539404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.539522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.539639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.539775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.539924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.539952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.540067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.540095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.540176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.540203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.540385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.540420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.540565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.540592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.540710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.540738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.540864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.540891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.541027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.541186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.541340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.541511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.541675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 
00:26:07.443 [2024-12-09 04:16:35.541783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.541926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.541956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.542049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.542074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.542186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.443 [2024-12-09 04:16:35.542213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.443 qpair failed and we were unable to recover it. 00:26:07.443 [2024-12-09 04:16:35.542300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.542327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 
00:26:07.444 [2024-12-09 04:16:35.542455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.542495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.542621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.542649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.542798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.542863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.543023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.543164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 
00:26:07.444 [2024-12-09 04:16:35.543285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.543398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.543519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.543680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.543862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.543903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 
00:26:07.444 [2024-12-09 04:16:35.543986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.544131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.544249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.544369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.544587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 
00:26:07.444 [2024-12-09 04:16:35.544721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.544835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.544948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.544974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.545132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.545176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 00:26:07.444 [2024-12-09 04:16:35.545283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.444 [2024-12-09 04:16:35.545321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.444 qpair failed and we were unable to recover it. 
00:26:07.444 [2024-12-09 04:16:35.545421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.444 [2024-12-09 04:16:35.545448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.444 qpair failed and we were unable to recover it.
[... the same three-line failure pattern — connect() failed with errno = 111 (ECONNREFUSED), sock connection error, qpair failed and unrecoverable — repeats continuously from 04:16:35.545 through 04:16:35.562 for tqpair=0x1818fa0, 0x7fe5b4000b90, 0x7fe5b8000b90, and 0x7fe5c0000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:07.447 [2024-12-09 04:16:35.562226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.562253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.562459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.562489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.562633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.562668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.562758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.562784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.562970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.563016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 
00:26:07.447 [2024-12-09 04:16:35.563135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.563160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.563265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.563314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.563414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.563441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.563556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.563584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 00:26:07.447 [2024-12-09 04:16:35.563670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.447 [2024-12-09 04:16:35.563696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.447 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.563803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.563830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.563952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.563978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.564073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.564216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.564344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.564492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.564661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.564827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.564959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.564984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.565104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.565282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.565402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.565507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.565653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.565764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.565875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.565902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.566558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.566875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.566989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.567106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.567238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.567371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.567484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.567589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.567700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 
00:26:07.448 [2024-12-09 04:16:35.567845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.567873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.567996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.448 [2024-12-09 04:16:35.568023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.448 qpair failed and we were unable to recover it. 00:26:07.448 [2024-12-09 04:16:35.568144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.568312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.568430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.568543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.568647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.568756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.568866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.568892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.569029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.569165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.569285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.569426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.569543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.569680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.569790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.569933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.569960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.570072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.570210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.570322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.570466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.570579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.570720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.570885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.570912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.571054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.571173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.571305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.571502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.571634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.571771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.571887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.571916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.572036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.572063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.572178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.572206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.572294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.572322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 00:26:07.449 [2024-12-09 04:16:35.572510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.572576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it. 
00:26:07.449 [2024-12-09 04:16:35.572864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.449 [2024-12-09 04:16:35.572931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.449 qpair failed and we were unable to recover it.
[... the same posix_sock_create (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fe5b8000b90 (addr=10.0.0.2, port=4420) repeats continuously from 04:16:35.572864 through 04:16:35.601990, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:26:07.453 [2024-12-09 04:16:35.602128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.602155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.602370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.602440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.602739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.602804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.603100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.603165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.603424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.603492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 
00:26:07.453 [2024-12-09 04:16:35.603738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.603802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.604095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.604161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.604446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.604515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.604819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.604885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.605177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.605204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 
00:26:07.453 [2024-12-09 04:16:35.605370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.605434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.605730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.605761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.605855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.605881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.606000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.606027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.606140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.606168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 
00:26:07.453 [2024-12-09 04:16:35.606365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.606432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.606715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.606780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.607035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.607104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.607367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.607394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.607530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.607557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 
00:26:07.453 [2024-12-09 04:16:35.607817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.607881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.608187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.608252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.608491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.608557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.608805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.608870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.609066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.609108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 
00:26:07.453 [2024-12-09 04:16:35.609227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.609255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.609382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.609408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.609496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.609523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.609684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.609750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 00:26:07.453 [2024-12-09 04:16:35.610040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.610105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.453 qpair failed and we were unable to recover it. 
00:26:07.453 [2024-12-09 04:16:35.610398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.453 [2024-12-09 04:16:35.610465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.610746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.610810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.611070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.611138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.611337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.611407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.611702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.611767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.612069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.612134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.612433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.612500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.612789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.612854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.613157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.613223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.613491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.613559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.613855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.613920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.614124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.614192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.614532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.614599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.614840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.614908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.615210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.615292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.615588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.615655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.615949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.616015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.616307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.616373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.616560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.616626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.616910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.616976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.617267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.617347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.617648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.617724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.617976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.618040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.618344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.618412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.618668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.618734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.619025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.619091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.619385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.619452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.619753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.619818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.620125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.620189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.620452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.620520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.620822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.620887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.621184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.621250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.621528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.621593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.621851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.621917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.622219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.622314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 
00:26:07.454 [2024-12-09 04:16:35.622619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.622695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.622990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.454 [2024-12-09 04:16:35.623054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.454 qpair failed and we were unable to recover it. 00:26:07.454 [2024-12-09 04:16:35.623351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.623419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 00:26:07.455 [2024-12-09 04:16:35.623676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.623741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 00:26:07.455 [2024-12-09 04:16:35.623999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.624063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 
00:26:07.455 [2024-12-09 04:16:35.624350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.624417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 00:26:07.455 [2024-12-09 04:16:35.624668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.624733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 00:26:07.455 [2024-12-09 04:16:35.624988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.625052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 00:26:07.455 [2024-12-09 04:16:35.625242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.625326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 00:26:07.455 [2024-12-09 04:16:35.625575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.455 [2024-12-09 04:16:35.625644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.455 qpair failed and we were unable to recover it. 
00:26:07.455 [2024-12-09 04:16:35.625895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.455 [2024-12-09 04:16:35.625960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.455 qpair failed and we were unable to recover it.
00:26:07.458 [2024-12-09 04:16:35.664757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.664825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.665073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.665141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.665392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.665461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.665759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.665824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.666067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.666133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 
00:26:07.458 [2024-12-09 04:16:35.666320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.666386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.666613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.666677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.666934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.667002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.667226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.667319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.667535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.667601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 
00:26:07.458 [2024-12-09 04:16:35.667895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.667960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.668257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.668355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.668645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.668710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.668952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.669017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.669317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.669384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 
00:26:07.458 [2024-12-09 04:16:35.669682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.669749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.670035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.670099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.670326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.670393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.670647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 00:26:07.458 [2024-12-09 04:16:35.671008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.458 [2024-12-09 04:16:35.671073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.458 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.671330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.671397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.671683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.671758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.672067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.672131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.672377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.672443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.672657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.672722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.673014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.673079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.673372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.673440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.673734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.673799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.674065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.674129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.674420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.674486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.674785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.674850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.675153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.675218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.675490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.675558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.675814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.675880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.676167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.676231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.676482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.676547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.676812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.676878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.677178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.677243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.677450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.677515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.677763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.677829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.678082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.678146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.678418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.678453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.678608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.678644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.678781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.678817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.678949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.678984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.679154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.679220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.679483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.679582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.679884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.679953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.680268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.680358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.680627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.680696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.680996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.681061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.681362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.681428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.681716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.681781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.682065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.682130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.682391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.682456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 
00:26:07.459 [2024-12-09 04:16:35.682753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.682819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.683071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.459 [2024-12-09 04:16:35.683135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.459 qpair failed and we were unable to recover it. 00:26:07.459 [2024-12-09 04:16:35.683415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.683481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.683781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.683845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.684138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.684202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 
00:26:07.460 [2024-12-09 04:16:35.684441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.684471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.684623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.684654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.684772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.684802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.684914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.684945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.685230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.685259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 
00:26:07.460 [2024-12-09 04:16:35.685358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.685386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.685532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.685592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.685781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.685863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.686063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.686122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.686378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.686408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 
00:26:07.460 [2024-12-09 04:16:35.686536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.686584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.686706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.686734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.686886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.686915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.687030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.687064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.687196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.687229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 
00:26:07.460 [2024-12-09 04:16:35.687353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.687387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.687488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.687517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.687630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.687663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.687807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.687841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 00:26:07.460 [2024-12-09 04:16:35.688070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.460 [2024-12-09 04:16:35.688134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.460 qpair failed and we were unable to recover it. 
00:26:07.462 [2024-12-09 04:16:35.701019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.701047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.701216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.701302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.701428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.701457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.701567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.701613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.701769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.701806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.704480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.704510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.704690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.704771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.704998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.705042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.705171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.705204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.462 [2024-12-09 04:16:35.705318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.462 [2024-12-09 04:16:35.705352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.462 qpair failed and we were unable to recover it.
00:26:07.463 [2024-12-09 04:16:35.711228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.711267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.711380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.711409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.711523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.711552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.711733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.711765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.711888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.711917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 
00:26:07.463 [2024-12-09 04:16:35.712065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.712095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.712226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.712263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.712373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.712404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.712510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.712540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.712694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.712741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 
00:26:07.463 [2024-12-09 04:16:35.712862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.712898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.463 [2024-12-09 04:16:35.713017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.463 [2024-12-09 04:16:35.713046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.463 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.713195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.713225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.713459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.713495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.713736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.713837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.714069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.714102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.714235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.714263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.714391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.714426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.714541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.714593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.714755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.714792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.715050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.715120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.715313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.715344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.715452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.715482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.715645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.715690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.715826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.715859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.716045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.716080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.716215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.716247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.716391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.716419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.716595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.716629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.716955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.717035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.717179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.717212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.717344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.717375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.717485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.717518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.717681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.717735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.718055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.718122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.718373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.718404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.718498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.718537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.718714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.718752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.718976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.719040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.719190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.719222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.719337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.719366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.719490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.719518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.719657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.719705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.719863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.719935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.720044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.720075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.720197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.720224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.720346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.720382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.720481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.720509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.720636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.720667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.720826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.720859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 
00:26:07.464 [2024-12-09 04:16:35.720991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.464 [2024-12-09 04:16:35.721037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.464 qpair failed and we were unable to recover it. 00:26:07.464 [2024-12-09 04:16:35.721220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.721250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.721392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.721422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.721574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.721602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.721721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.721767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 
00:26:07.465 [2024-12-09 04:16:35.721873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.721905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.722039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.722072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.722244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.722281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.722425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.722458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.722568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.722599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 
00:26:07.465 [2024-12-09 04:16:35.722719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.722751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.722914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.722948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.723150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.723223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.723344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.723374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.723527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.723566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 
00:26:07.465 [2024-12-09 04:16:35.723675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.723708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.723921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.723988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.724236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.724267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.724427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.724457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.724610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.724643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 
00:26:07.465 [2024-12-09 04:16:35.724830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.724881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.725039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.725074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.725241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.725277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.725383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.725438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 00:26:07.465 [2024-12-09 04:16:35.725603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.465 [2024-12-09 04:16:35.725653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.465 qpair failed and we were unable to recover it. 
00:26:07.465 [2024-12-09 04:16:35.725832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.465 [2024-12-09 04:16:35.725863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.465 qpair failed and we were unable to recover it.
00:26:07.465 [... the connect() / nvme_tcp_qpair_connect_sock error pair above repeated ~114 more times between 04:16:35.726 and 04:16:35.745, cycling through tqpair values 0x7fe5b8000b90, 0x7fe5b4000b90, and 0x1818fa0; every attempt to 10.0.0.2, port 4420 failed with errno = 111 and ended with "qpair failed and we were unable to recover it." ...]
00:26:07.468 [2024-12-09 04:16:35.745426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.745456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.745570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.745603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.745761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.745792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.745918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.745949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.746097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.746128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 
00:26:07.468 [2024-12-09 04:16:35.746254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.746327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.746420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.746452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.746597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.746628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.746754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.746784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.746881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.746910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 
00:26:07.468 [2024-12-09 04:16:35.747038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.747189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.747364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.747526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.747651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 
00:26:07.468 [2024-12-09 04:16:35.747784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.468 [2024-12-09 04:16:35.747961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.468 [2024-12-09 04:16:35.747992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.468 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.748113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.748148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.748248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.748286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.748435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.748463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.748557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.748584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.748704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.748732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.748823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.748850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.748975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.749157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.749292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.749448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.749600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.749779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.749936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.749966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.750065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.750092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.750231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.750260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.750395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.750429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.750520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.750564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.750691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.750720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.750841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.750872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.750995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.751154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.751294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.751486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.751609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.751748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.751875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.751905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.752054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.752177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.752354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.752503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.752635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.752755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.752886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.752912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.753007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.753139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.753250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.753381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.753494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.753638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.753778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.753911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.753943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.754050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.754077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 00:26:07.469 [2024-12-09 04:16:35.754199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.469 [2024-12-09 04:16:35.754228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.469 qpair failed and we were unable to recover it. 
00:26:07.469 [2024-12-09 04:16:35.754327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.754354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.754494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.754521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.754635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.754663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.754777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.754809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.754907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.754935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 
00:26:07.470 [2024-12-09 04:16:35.755049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.755157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.755299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.755411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.755551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 
00:26:07.470 [2024-12-09 04:16:35.755663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.755779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.755919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.755946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.756060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.756088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 00:26:07.470 [2024-12-09 04:16:35.756203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.470 [2024-12-09 04:16:35.756230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.470 qpair failed and we were unable to recover it. 
00:26:07.470 [2024-12-09 04:16:35.756361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.756389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.756510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.756544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.756684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.756714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.756828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.756856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.756999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.757892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.757976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.758116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.758266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.758446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.758613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.758757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.758890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.758917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.759920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.759945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.760026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.760051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.760160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.760186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.760334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.470 [2024-12-09 04:16:35.760441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.470 [2024-12-09 04:16:35.760467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.470 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.760572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.760597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.760733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.760759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.760874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.760900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.760984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.761904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.761929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.762820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.762846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.763898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.763925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.764951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.764983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.765942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.765969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.766058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.766089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.471 qpair failed and we were unable to recover it.
00:26:07.471 [2024-12-09 04:16:35.766199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.471 [2024-12-09 04:16:35.766227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.766357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.766391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.766476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.766501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.766645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.766672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.766756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.766782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.766874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.766904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.767973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.767998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.768871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.768897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.769940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.769965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.472 [2024-12-09 04:16:35.770079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.472 [2024-12-09 04:16:35.770105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.472 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.770211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.770238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.770384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.770415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.770498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.770523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.770700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.770767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.771035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.771066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.771318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.771346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.771444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.771470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.771590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.771617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.771757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.771784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.771930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.771996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.772251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.772297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.772407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.772432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.772570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.772596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.772733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.473 [2024-12-09 04:16:35.772776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.473 qpair failed and we were unable to recover it.
00:26:07.473 [2024-12-09 04:16:35.772890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.772962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.773231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.773260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.773427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.773456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.773572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.773607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.773873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.773906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 
00:26:07.473 [2024-12-09 04:16:35.774217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.774323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.774472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.774498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.774624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.774651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.774769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.774796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.775036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.775071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 
00:26:07.473 [2024-12-09 04:16:35.775213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.775248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.775422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.775449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.775567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.775595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.775784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.775851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.776173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.776240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 
00:26:07.473 [2024-12-09 04:16:35.776400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.776428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.776545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.776572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.776835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.776902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.777147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.777213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.777414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.777442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 
00:26:07.473 [2024-12-09 04:16:35.777557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.777585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.777702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.777729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.777823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.777910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.473 qpair failed and we were unable to recover it. 00:26:07.473 [2024-12-09 04:16:35.778140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.473 [2024-12-09 04:16:35.778206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.778433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.778461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.778552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.778607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.778804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.778872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.779173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.779251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.779427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.779457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.779601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.779644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.779793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.779833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.780020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.780098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.780374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.780402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.780501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.780532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.780623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.780649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.780791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.780819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.780996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.781062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.781329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.781358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.781443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.781467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.781549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.781576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.781900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.781935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.782142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.782220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.782428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.782455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.782553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.782580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.782662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.782687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.782771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.782796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.782907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.782998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.783267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.783302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.783429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.783463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.783542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.783566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.783705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.783732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.783834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.783907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.784096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.784167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.784401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.784429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.784554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.784601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.784738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.784773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.785006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.785072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.785305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.785359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.785457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.785482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.474 [2024-12-09 04:16:35.785603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.785629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 
00:26:07.474 [2024-12-09 04:16:35.785747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.474 [2024-12-09 04:16:35.785812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.474 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.786043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.786070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.786358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.786386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.786495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.786525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.786616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.786640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.786769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.786836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.787090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.787161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.787414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.787446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.787568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.787613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.787790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.787839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.788019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.788084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.788329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.788357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.788446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.788470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.788615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.788651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.788873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.788940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.789164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.789238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.789444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.789471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.789669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.789697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.789845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.789880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.790052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.790115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.790399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.790435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.790580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.790615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.790708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.790749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.790918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.790950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.791058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.791083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.791208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.791249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.791527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.791601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.791862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.791893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.792019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.792046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.792166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.792194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.792343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.792375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.792500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.792528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.792633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.792661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.792781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.792810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.793033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.793100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 
00:26:07.475 [2024-12-09 04:16:35.793323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.793351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.475 [2024-12-09 04:16:35.793471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.475 qpair failed and we were unable to recover it. 00:26:07.475 [2024-12-09 04:16:35.793555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.793581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.793727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.793755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.793948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.794015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.476 [2024-12-09 04:16:35.794186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.794247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.794382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.794424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.794696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.794736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.794857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.794895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.795153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.795188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.476 [2024-12-09 04:16:35.795324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.795361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.795504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.795539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.795683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.795750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.795869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.795907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.796074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.796130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.476 [2024-12-09 04:16:35.796315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.796356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.796470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.796510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.796774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.796809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.796950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.796985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.797148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.797188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.476 [2024-12-09 04:16:35.797444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.797486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.797657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.797696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.797901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.797969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.798226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.798262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.798420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.798457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.476 [2024-12-09 04:16:35.798558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.798591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.798685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.798714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.798864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.798893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.799050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.799094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.799243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.799284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.476 [2024-12-09 04:16:35.799450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.799484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.799660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.799730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.799945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.800012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.800258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.800301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 00:26:07.476 [2024-12-09 04:16:35.800435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.476 [2024-12-09 04:16:35.800468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.476 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.800622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.800657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.800816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.800872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.800988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.801151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.801348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.801505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.801627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.801798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.801954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.801986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.802107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1826f30 is same with the state(6) to be set 00:26:07.477 [2024-12-09 04:16:35.802315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.802360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.802516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.802547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.802636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.802665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.802900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.802938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.803081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.803117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.803283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.803318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.803427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.803463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.803594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.803660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.803885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.803966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.804158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.804190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.804331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.804364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.804465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.804502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.804657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.804693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.804911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.804946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.805067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.805097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.805264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.805307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.805409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.805463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.805583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.805612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.805718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.805753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.805918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.806031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.806305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.806342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.806474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.806514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.806759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.806817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.806996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.807025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.807146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.807175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.477 [2024-12-09 04:16:35.807322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.807361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 
00:26:07.477 [2024-12-09 04:16:35.807537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.477 [2024-12-09 04:16:35.807573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.477 qpair failed and we were unable to recover it. 00:26:07.478 [2024-12-09 04:16:35.807719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.478 [2024-12-09 04:16:35.807772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.478 qpair failed and we were unable to recover it. 00:26:07.478 [2024-12-09 04:16:35.807896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.478 [2024-12-09 04:16:35.807925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.478 qpair failed and we were unable to recover it. 00:26:07.478 [2024-12-09 04:16:35.808107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.478 [2024-12-09 04:16:35.808151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.478 qpair failed and we were unable to recover it. 00:26:07.478 [2024-12-09 04:16:35.808248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.478 [2024-12-09 04:16:35.808285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.478 qpair failed and we were unable to recover it. 
00:26:07.478 [2024-12-09 04:16:35.808388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.478 [2024-12-09 04:16:35.808417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.478 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" triplet repeats continuously from 04:16:35.808586 through 04:16:35.831196 for tqpair=0x7fe5c0000b90, 0x7fe5b4000b90, and 0x7fe5b8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:26:07.481 [2024-12-09 04:16:35.831334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.831370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.831479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.831512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.831612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.831646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.831761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.831795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.831974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.832040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 
00:26:07.481 [2024-12-09 04:16:35.832310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.832381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.832672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.832747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.833090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.833157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.833490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.833558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.833868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.833907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 
00:26:07.481 [2024-12-09 04:16:35.834047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.834104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.834355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.834424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.834732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.834799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.835104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.835176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.835438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.835505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 
00:26:07.481 [2024-12-09 04:16:35.835770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.835838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.836088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.836124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.836266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.836309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.836577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.836662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.836916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.836983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 
00:26:07.481 [2024-12-09 04:16:35.837252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.837337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.837591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.837669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.837993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.481 [2024-12-09 04:16:35.838068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.481 qpair failed and we were unable to recover it. 00:26:07.481 [2024-12-09 04:16:35.838366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.838435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.838733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.838801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.839101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.839168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.839451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.839488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.839624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.839665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.839819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.839854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.840110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.840178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.840424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.840492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.840800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.840867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.841091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.841159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.841422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.841462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.841604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.841639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.841880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.841961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.842287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.842356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.842652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.842727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.843027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.843102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.843363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.843431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.843660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.843732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.843967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.844002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.844136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.844172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.844316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.844352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.844472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.844509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.844804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.844840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.845034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.845101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.845354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.845391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.845504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.845541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.845742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.845820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.846051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.846086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.846233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.846268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.846533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.846600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.846891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.846926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.847068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.847104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.847243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.847316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.847543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.847611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.847873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.847942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.848235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.848327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.482 [2024-12-09 04:16:35.848589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.848656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 
00:26:07.482 [2024-12-09 04:16:35.848882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.482 [2024-12-09 04:16:35.848949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.482 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.849230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.849264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.849452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.849488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.849624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.849659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.849802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.849857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 
00:26:07.483 [2024-12-09 04:16:35.850116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.850182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.850477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.850520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.850714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.850783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.851032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.851097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.851404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.851481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 
00:26:07.483 [2024-12-09 04:16:35.851718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.851779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.851922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.851957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.852218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.852298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.852496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.852575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 00:26:07.483 [2024-12-09 04:16:35.852863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.483 [2024-12-09 04:16:35.852940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.483 qpair failed and we were unable to recover it. 
00:26:07.483 [2024-12-09 04:16:35.853196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.483 [2024-12-09 04:16:35.853286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.483 qpair failed and we were unable to recover it.
00:26:07.485 [2024-12-09 04:16:35.868860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485 [2024-12-09 04:16:35.868902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.485 qpair failed and we were unable to recover it.
00:26:07.485 [2024-12-09 04:16:35.873138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.485 [2024-12-09 04:16:35.873182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.485 qpair failed and we were unable to recover it.
00:26:07.486 [2024-12-09 04:16:35.876833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.876869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.877007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.877036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.877142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.877174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.877269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.877330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.877414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.877447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 
00:26:07.486 [2024-12-09 04:16:35.877563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.877595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.877733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.877768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.877989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.878125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.878325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 
00:26:07.486 [2024-12-09 04:16:35.878469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.878602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.878773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.878925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.878956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 00:26:07.486 [2024-12-09 04:16:35.879068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.879100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.486 qpair failed and we were unable to recover it. 
00:26:07.486 [2024-12-09 04:16:35.879230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.486 [2024-12-09 04:16:35.879259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.879402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.879430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.879541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.879567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.879643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.879670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.879869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.879899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.880062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.880091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.880190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.880219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.880367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.880393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.880504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.880530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.880637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.880681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.880827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.880859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.881002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.881036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.881202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.881231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.881382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.881410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.881509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.881537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.881661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.881694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.881814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.881849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.882065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.882094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.882265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.882336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.882427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.882465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.882551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.882578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.882654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.882681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.882776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.882814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.882952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.883009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.883229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.883263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.883413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.883440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.883535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.883569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.883652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.883680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.883831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.883889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.884019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.884161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.884332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.884463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.884587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.884710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 
00:26:07.487 [2024-12-09 04:16:35.884883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.884936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.885058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.885088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.885245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.885283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.487 qpair failed and we were unable to recover it. 00:26:07.487 [2024-12-09 04:16:35.885396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.487 [2024-12-09 04:16:35.885423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.885509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.885537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.885668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.885703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.885864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.885895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.886029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.886149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.886332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.886449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.886633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.886807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.886947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.886995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.887098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.887127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.887245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.887279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.887425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.887452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.887564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.887590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.887799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.887864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.888039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.888110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.888202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.888231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.888376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.888403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.888525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.888551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.888679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.888732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.888982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.889046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.889232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.889295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.889402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.889429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.889509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.889536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.889679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.889711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.889905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.889934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.890203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.890231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.890382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.890409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.890525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.890571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.890684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.890717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.890954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.891017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.891172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.891201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.891347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.891374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.891471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.891497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.891637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.891663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.891782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.891833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 
00:26:07.488 [2024-12-09 04:16:35.892034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.892104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.488 qpair failed and we were unable to recover it. 00:26:07.488 [2024-12-09 04:16:35.892281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.488 [2024-12-09 04:16:35.892327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.892446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.892473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.892566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.892594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.892685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.892739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.892991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.893022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.893179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.893208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.893349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.893376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.893499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.893527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.893617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.893702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.893832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.893892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.894058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.894093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.894268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.894303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.894408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.894434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.894509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.894535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.894612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.894638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.894750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.894839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.895068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.895097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.895218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.895247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.895490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.895530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.895874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.895911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.896099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.896128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.896224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.896254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.896379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.896410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.896540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.896586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.896733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.896798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.896957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.897126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.897250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.897404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.897519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.897691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.897837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.897961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.897991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.898082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.898113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.489 [2024-12-09 04:16:35.898269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.898325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.898445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.898472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.898582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.898614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.898795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.898825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 00:26:07.489 [2024-12-09 04:16:35.898918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.489 [2024-12-09 04:16:35.898946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.489 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.899031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.899176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.899304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.899421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.899565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.899667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.899840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.899931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.900055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.900085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.900177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.900221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.900332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.900359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.900463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.900489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.900700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.900726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.900835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.900862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.900950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.901013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.901174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.901203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.901342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.901369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.901488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.901514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.901625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.901651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.901746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.901792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.901985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.902014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.902256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.902291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.902388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.902416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.902501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.902527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.902643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.902670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.902827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.902901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.903061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.903092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.903196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.903228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.903382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.903410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.903496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.903523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.903663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.903728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 
00:26:07.490 [2024-12-09 04:16:35.903896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.903957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.904170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.904220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.904354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.490 [2024-12-09 04:16:35.904382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.490 qpair failed and we were unable to recover it. 00:26:07.490 [2024-12-09 04:16:35.904496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.904523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.904730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.904774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.904965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.904994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.905091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.905120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.905234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.905269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.905375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.905404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.905559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.905589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.905736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.905782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.905903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.905948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.906057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.906088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.906222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.906248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.906363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.906391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.906500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.906527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.906634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.906660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.906830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.906894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.907090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.907120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.907244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.907280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.907400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.907429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.907541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.907574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.907704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.907737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.907865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.907910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.908177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.908210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.908394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.908425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.908509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.908537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.908645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.908673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.908781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.908807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.908891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.908919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.909031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.909079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.909349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.909380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.909482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.909511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.909689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.909723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.909817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.909857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.910091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.910158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.910361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.910394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.910491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.910534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.910678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.910704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.910864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.910919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 
00:26:07.491 [2024-12-09 04:16:35.911134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.491 [2024-12-09 04:16:35.911163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.491 qpair failed and we were unable to recover it. 00:26:07.491 [2024-12-09 04:16:35.911290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.911320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.911433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.911477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.911598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.911627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.911734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.911761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.911847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.911875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.911986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.912029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.912127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.912156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.912247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.912305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.912400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.912427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.912584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.912614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.912761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.912790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.912986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.913050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.913326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.913356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.913478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.913506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.913672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.913706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.913877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.913955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.914224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.914257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.914410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.914441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.914531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.914561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.914728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.914783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.914926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.914979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.915100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.915131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.915285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.915315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.915498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.915553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.915715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.915772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.915901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.915961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.916084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.916113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.916192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.916221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.916424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.916458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.916593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.916644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.916731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.916758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.916874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.916901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.917018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.917046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.917181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.917211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 
00:26:07.492 [2024-12-09 04:16:35.917354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.917402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.917540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.917590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.917684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.917712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.917850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.492 [2024-12-09 04:16:35.917883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.492 qpair failed and we were unable to recover it. 00:26:07.492 [2024-12-09 04:16:35.918034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.918063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.918184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.918214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.918377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.918407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.918526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.918560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.918784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.918848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.919143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.919177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.919296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.919324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.919450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.919479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.919712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.919746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.919864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.919916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.920130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.920163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.920318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.920345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.920458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.920484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.920598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.920628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.920792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.920826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.921006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.921070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.921320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.921349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.921521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.921547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.921636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.921661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.921859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.921885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.921993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.922179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.922363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.922505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.922645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.922802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.922965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.922998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.923150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.923194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.923336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.923381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.923499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.923526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.923737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.923796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.924025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.924052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.924200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.924229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.924389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.924419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.924535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.924564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 00:26:07.493 [2024-12-09 04:16:35.924689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.493 [2024-12-09 04:16:35.924718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.493 qpair failed and we were unable to recover it. 
00:26:07.493 [2024-12-09 04:16:35.924832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.493 [2024-12-09 04:16:35.924882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.493 qpair failed and we were unable to recover it.
00:26:07.493 [2024-12-09 04:16:35.924998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.493 [2024-12-09 04:16:35.925025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.493 qpair failed and we were unable to recover it.
00:26:07.493 [2024-12-09 04:16:35.925147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.493 [2024-12-09 04:16:35.925173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.925348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.925375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.925449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.925475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.925561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.925588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.925673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.925700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.925861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.925890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.926019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.926048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.926158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.926190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.926321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.926351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.926477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.926506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.926693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.926760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.927003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.927070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.927350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.927377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.927467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.927493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.927629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.927656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.927762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.927796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.927952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.928001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.928122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.928152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.928283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.928313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.928407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.928437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.928646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.928693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.928838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.928888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.929033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.929066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.929229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.929262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.929380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.929409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.929587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.929645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.929813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.929869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.930102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.930153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.930305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.930348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.930460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.930487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.930601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.930627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.930733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.930759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.930902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.930928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.931062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.931091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.494 [2024-12-09 04:16:35.931185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.494 [2024-12-09 04:16:35.931216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.494 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.931353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.931383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.931504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.931533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.931716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.931782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.932076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.932141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.932394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.932424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.932666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.932730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.932927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.932973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.933186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.933220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.933344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.933374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.933495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.933526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.933669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.933702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.933887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.933919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.934156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.934222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.934461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.934488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.934630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.934656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.934762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.934810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.935029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.935094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.935377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.935421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.935587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.935619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.935787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.935824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.935985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.936936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.936965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.937047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.937076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.937175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.937204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.937356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.937390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.937543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.937621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.937896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.937928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.938062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.938094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.938196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.938229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.938368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.495 [2024-12-09 04:16:35.938398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.495 qpair failed and we were unable to recover it.
00:26:07.495 [2024-12-09 04:16:35.938512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.938540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.938831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.938857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.938994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.939020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.939131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.939158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.939311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.939340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.939464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.939502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.939608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.939658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.939837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.939868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.939988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.940020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.940116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.940147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.940301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.940333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.940490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.940520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.940616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.940645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.940794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.940823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.941033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.941098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.941353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.941383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.941503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.941532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.941739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.941768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.941885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.941914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.942035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.942061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.942211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.942240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.942384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.496 [2024-12-09 04:16:35.942411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.496 qpair failed and we were unable to recover it.
00:26:07.496 [2024-12-09 04:16:35.942534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.942561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.942675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.942703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.942915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.942978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.943229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.943313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.943453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.943497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 
00:26:07.496 [2024-12-09 04:16:35.943638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.943664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.943741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.943786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.944055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.944122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.944338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.944370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 00:26:07.496 [2024-12-09 04:16:35.944498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.944527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.496 qpair failed and we were unable to recover it. 
00:26:07.496 [2024-12-09 04:16:35.944730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.496 [2024-12-09 04:16:35.944756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.944867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.944893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.945008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.945035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.945132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.945191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.945324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.945357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.945463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.945493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.945640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.945701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.945840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.945887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.946086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.946112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.946228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.946254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.946344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.946370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.946560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.946590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.946716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.946745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.946868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.946897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.946985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.947136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.947308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.947443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.947603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.947767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.947924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.947951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.948085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.948111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.948187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.948214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.948360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.948390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.948589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.948653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.948876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.948941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.949188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.949253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.949477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.949519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.949633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.949661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.949865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.949897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.950029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.950062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.950153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.950186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.950337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.950364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.950504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.950530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.950664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.950708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 
00:26:07.497 [2024-12-09 04:16:35.950821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.497 [2024-12-09 04:16:35.950847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.497 qpair failed and we were unable to recover it. 00:26:07.497 [2024-12-09 04:16:35.950933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.950987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.951229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.951255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.951376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.951403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.951560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.951589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.951676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.951705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.951851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.951884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.952026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.952053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.952168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.952193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.952333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.952363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.952519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.952546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.952655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.952682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.952815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.952879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.953072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.953099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.953182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.953208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.953328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.953355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.953451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.953477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.953600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.953634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.953741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.953777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.954012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.954076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.954282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.954311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.954435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.954465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.954568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.954607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.954751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.954783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.954936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.954966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.955191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.955217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.955301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.955327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.955425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.955455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.955640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.955705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.956009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.956041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.956183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.956227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.956315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.956343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.956462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.956488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.956652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.956716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.957004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.957070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 
00:26:07.498 [2024-12-09 04:16:35.957379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.498 [2024-12-09 04:16:35.957408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.498 qpair failed and we were unable to recover it. 00:26:07.498 [2024-12-09 04:16:35.957519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.499 [2024-12-09 04:16:35.957548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.499 qpair failed and we were unable to recover it. 00:26:07.499 [2024-12-09 04:16:35.957841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.499 [2024-12-09 04:16:35.957905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.499 qpair failed and we were unable to recover it. 00:26:07.499 [2024-12-09 04:16:35.958120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.499 [2024-12-09 04:16:35.958186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.499 qpair failed and we were unable to recover it. 00:26:07.499 [2024-12-09 04:16:35.958422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.499 [2024-12-09 04:16:35.958452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.499 qpair failed and we were unable to recover it. 
00:26:07.499 [2024-12-09 04:16:35.958575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.499 [2024-12-09 04:16:35.958604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.499 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." message triplet for tqpair=0x1818fa0, addr=10.0.0.2, port=4420 repeats with advancing timestamps from 04:16:35.958693 through 04:16:35.968044; repeated occurrences elided ...]
00:26:07.500 [2024-12-09 04:16:35.968153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.968248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.968586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.968659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.968961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.969029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.969343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.969407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.969720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.969787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 
00:26:07.500 [2024-12-09 04:16:35.970085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.970152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.970427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.970491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.970786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.970852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.971165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.971232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.971528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.971562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 
00:26:07.500 [2024-12-09 04:16:35.971673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.971717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.971858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.971885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.972107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.972134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.972275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.972308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.972453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.972487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 
00:26:07.500 [2024-12-09 04:16:35.972703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.972770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.500 qpair failed and we were unable to recover it. 00:26:07.500 [2024-12-09 04:16:35.973082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.500 [2024-12-09 04:16:35.973149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.973420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.973468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.973584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.973611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.973775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.973808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.501 [2024-12-09 04:16:35.973970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.974012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.974097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.974124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.974241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.974267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.974386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.974447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.974688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.974749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.501 [2024-12-09 04:16:35.974966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.975015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.975131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.975159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.975287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.975315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.975430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.975457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.975623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.975688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.501 [2024-12-09 04:16:35.975996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.976061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.976313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.976392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.976637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.976663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.976804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.976830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.977071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.977137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.501 [2024-12-09 04:16:35.977428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.977491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.977764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.977825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.978133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.978160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.978282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.978310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.978448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.978482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.501 [2024-12-09 04:16:35.978685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.978713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.978830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.978857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.979017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.979082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.979405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.979467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.979769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.979795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.501 [2024-12-09 04:16:35.979920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.979945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.980112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.980181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.980517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.980544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.980657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.980685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 00:26:07.501 [2024-12-09 04:16:35.980792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.501 [2024-12-09 04:16:35.980819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.501 qpair failed and we were unable to recover it. 
00:26:07.783 [2024-12-09 04:16:35.988316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.783 [2024-12-09 04:16:35.988414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.783 qpair failed and we were unable to recover it.
00:26:07.785 [2024-12-09 04:16:36.007958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.007988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.008102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.008130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.008303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.008378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.008606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.008671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.008931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.008997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 
00:26:07.785 [2024-12-09 04:16:36.009213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.009246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.009449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.009516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.009815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.009879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.010128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.010191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.010469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.010503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 
00:26:07.785 [2024-12-09 04:16:36.010630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.010663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.010797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.010829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.011038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.011102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.011399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.011478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.011735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.011761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 
00:26:07.785 [2024-12-09 04:16:36.011875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.011902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.012021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.012047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.012186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.012212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.785 qpair failed and we were unable to recover it. 00:26:07.785 [2024-12-09 04:16:36.012380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.785 [2024-12-09 04:16:36.012447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.012680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.012745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.012990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.013016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.013157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.013183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.013259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.013316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.013619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.013683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.013989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.014053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.014340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.014406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.014653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.014679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.014766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.014792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.014878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.014923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.015149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.015203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.015472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.015538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.015800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.015827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.015939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.015966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.016203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.016267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.016535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.016602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.016832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.016865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.017026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.017058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.017195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.017259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.017578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.017605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.017722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.017746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.017909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.017974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.018225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.018281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.018405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.018431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.018547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.018581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.018692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.018725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.018867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.018901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.019004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.019191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.019329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.019467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.019645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.019800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.019936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.019971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.020118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.020175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.020436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.020472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.020619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.020654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.020823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.020857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.021109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.021173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.021390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.021424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.021579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.021612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 
00:26:07.786 [2024-12-09 04:16:36.021746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.786 [2024-12-09 04:16:36.021779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.786 qpair failed and we were unable to recover it. 00:26:07.786 [2024-12-09 04:16:36.021892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.021924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.022113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.022177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.022396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.022430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.022552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.022585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 
00:26:07.787 [2024-12-09 04:16:36.022713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.022745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.022910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.022943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.023113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.023145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.023390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.023529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.023578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 
00:26:07.787 [2024-12-09 04:16:36.023741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.023774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.023987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.024021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.024224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.024292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.024428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.024461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.024624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.024656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 
00:26:07.787 [2024-12-09 04:16:36.024891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.024955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.025211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.025293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.025486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.025519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.025654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.025688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 00:26:07.787 [2024-12-09 04:16:36.025901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.787 [2024-12-09 04:16:36.025967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.787 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.056842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.056907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.057215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.057297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.057586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.057650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.057902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.057965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.058217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.058313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.058506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.058571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.058760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.058825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.059114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.059177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.059389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.059457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.059695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.059761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.060050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.060126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.060387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.060452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.060692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.060756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.061030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.061093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.061324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.061390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.061644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.061709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.061948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.062015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.062306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.062372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.062681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.062746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.062994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.063058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.063307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.063373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.063627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.063691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.063953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.064017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.064234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.064316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.064592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.064656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.064934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.064998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.065248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.065333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.065628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.065692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.065933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.065998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.066216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.066311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.066576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.066643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.066902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.066967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.067261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.067346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.067631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.067697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.067939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.068003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.068244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.068336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.068591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.068657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.068923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.068987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.069288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.069354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.069662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.069727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 
00:26:07.790 [2024-12-09 04:16:36.070012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.070077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.070361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.070428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.070732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.790 [2024-12-09 04:16:36.070796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.790 qpair failed and we were unable to recover it. 00:26:07.790 [2024-12-09 04:16:36.071099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.071163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.071433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.071498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.071778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.071842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.072026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.072090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.072388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.072453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.072699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.072764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.073054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.073119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.073391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.073467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.073736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.073800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.074097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.074163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.074429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.074495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.074787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.074851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.075157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.075221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.075535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.075600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.075796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.075861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.076112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.076177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.076454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.076520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.076769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.076834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.077119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.077184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.077486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.077551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.077734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.077798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.078063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.078128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.078353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.078420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.078679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.078744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.078998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.079065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.079322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.079389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.079608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.079675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.079968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.080031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.080250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.080331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.080573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.080639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.080886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.080950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 00:26:07.791 [2024-12-09 04:16:36.081164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.791 [2024-12-09 04:16:36.081227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.791 qpair failed and we were unable to recover it. 
00:26:07.791 [2024-12-09 04:16:36.081460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.791 [2024-12-09 04:16:36.081527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.791 qpair failed and we were unable to recover it.
00:26:07.794 [2024-12-09 04:16:36.119556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.119619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.119869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.119932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.120131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.120199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.120506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.120570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.120815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.120882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 
00:26:07.794 [2024-12-09 04:16:36.121177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.121242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.121522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.121586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.121833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.121899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.122147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.122212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.122405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.122470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 
00:26:07.794 [2024-12-09 04:16:36.122710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.122774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.123037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.123102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.123408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.123474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.123718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.123784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.124073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.124136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 
00:26:07.794 [2024-12-09 04:16:36.124388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.124453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.124743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.124807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.125102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.125165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.125469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.125535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 00:26:07.794 [2024-12-09 04:16:36.125830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.794 [2024-12-09 04:16:36.125893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.794 qpair failed and we were unable to recover it. 
00:26:07.794 [2024-12-09 04:16:36.126183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.126258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.126546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.126611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.126904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.126967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.127287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.127353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.127642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.127706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.127949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.128013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.128302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.128367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.128615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.128680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.128875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.128942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.129239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.129338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.129629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.129694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.129940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.130004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.130256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.130339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.130628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.130692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.130992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.131056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.131352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.131418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.131710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.131774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.132060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.132124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.132360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.132426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.132675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.132739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.132995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.133058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.133244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.133325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.133611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.133676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.133924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.133988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.134294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.134359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.134655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.134720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.134961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.135025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.135338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.135404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.135656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.135721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.135969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.136034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.136306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.136372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.136575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.136639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.136927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.137320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.137386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.137648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.137713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 
00:26:07.795 [2024-12-09 04:16:36.137953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.138017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.138238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.138315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.138542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.795 [2024-12-09 04:16:36.138606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.795 qpair failed and we were unable to recover it. 00:26:07.795 [2024-12-09 04:16:36.138851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.138918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.139215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.139297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 
00:26:07.796 [2024-12-09 04:16:36.139541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.139616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.139862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.139926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.140124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.140188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.140492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.140557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.140801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.140867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 
00:26:07.796 [2024-12-09 04:16:36.141119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.141183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.141453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.141519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.141769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.141833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.142113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.142178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.142449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.142517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 
00:26:07.796 [2024-12-09 04:16:36.142761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.142827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.143037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.143104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.143336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.143403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.143610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.143675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.143983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.144048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 
00:26:07.796 [2024-12-09 04:16:36.144298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.144364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.144577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.144640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.144938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.145002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.145258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.145347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 00:26:07.796 [2024-12-09 04:16:36.145624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.796 [2024-12-09 04:16:36.145689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.796 qpair failed and we were unable to recover it. 
00:26:07.796 [... ~110 further identical attempts elided: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420, each ending in "qpair failed and we were unable to recover it.", repeating between 04:16:36.145932 and 04:16:36.178222 ...]
00:26:07.799 [2024-12-09 04:16:36.178431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.178465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.178593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.178626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.178743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.178801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.178989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.179057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.179284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.179324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.179430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.179466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.179569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.179603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.179757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.179792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.179934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.179968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.180070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.180105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.180215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.180250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.180406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.180438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.180550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.180594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.180726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.180759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.180886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.180918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.181099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.181164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.181409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.181443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.181545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.181583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.181703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.181736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.181887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.181919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.182103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.182168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.182392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.182427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.182572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.182606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.182803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.182867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.183053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.183116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.183357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.183392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.183577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.183653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.183950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.184013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.184250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.184302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.184472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.184506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.184712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.184775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.185049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.185113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.185362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.185396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.185540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.185611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.185856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.185889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 
00:26:07.799 [2024-12-09 04:16:36.185995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.186053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.799 qpair failed and we were unable to recover it. 00:26:07.799 [2024-12-09 04:16:36.186290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.799 [2024-12-09 04:16:36.186325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.186469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.186508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.186686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.186751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.187008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.187072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 
00:26:07.800 [2024-12-09 04:16:36.187292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.187327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.187497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.187530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.187666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.187701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.187835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.187870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.188041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.188076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 
00:26:07.800 [2024-12-09 04:16:36.188239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.188284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.188416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.188467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.188673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.188734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.188903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.188963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 00:26:07.800 [2024-12-09 04:16:36.189159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.800 [2024-12-09 04:16:36.189215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.800 qpair failed and we were unable to recover it. 
[identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" messages for tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 repeat through 04:16:36.197838 and are omitted]
00:26:07.801 [2024-12-09 04:16:36.197983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.198016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.198155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.198188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.198320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.198355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.198476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.198510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.198657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.198690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 
00:26:07.801 [2024-12-09 04:16:36.198858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.198892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.199028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.199061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.199201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.199235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.199404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.199440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.199542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.199579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 
00:26:07.801 [2024-12-09 04:16:36.199707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.199740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.199873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.199907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.200052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.200086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.200194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.200227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.200348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.200382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 
00:26:07.801 [2024-12-09 04:16:36.200508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.200543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.200697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.200730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.200838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.200872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.201010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.201046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.201187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.201221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 
00:26:07.801 [2024-12-09 04:16:36.201343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.201377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.201479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.201513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.201692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.201729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.201826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.201860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.202002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.202036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 
00:26:07.801 [2024-12-09 04:16:36.202194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.202228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.202357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.202391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.202505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.202540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.202689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.202723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.801 qpair failed and we were unable to recover it. 00:26:07.801 [2024-12-09 04:16:36.202820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.801 [2024-12-09 04:16:36.202854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.203044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.203114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.203258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.203326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.203478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.203516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.203630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.203665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.203833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.203868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.203996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.204155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.204325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.204465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.204626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.204768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.204946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.204981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.205099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.205136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.205279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.205314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.205421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.205455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.205601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.205644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.205763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.205797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.205905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.205939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.206084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.206117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.206231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.206283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.206390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.206425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.206564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.206599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.206745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.206779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.206889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.206923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.207061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.207096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.207252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.207317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.207447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.207491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.207630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.207682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.207835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.207871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.207984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.208125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.208280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.208428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.208569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.208736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.208915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.208948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.209062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.209095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.209225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.209259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.209389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.209423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.209534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.209568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.209701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.209734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.209861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.209894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.209999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.210034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.210136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.210173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.210317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.210352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 
00:26:07.802 [2024-12-09 04:16:36.210468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.802 [2024-12-09 04:16:36.210511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.802 qpair failed and we were unable to recover it. 00:26:07.802 [2024-12-09 04:16:36.210635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.210673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.210827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.210862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.211012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.211047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.211197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.211232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.211357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.211398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.211503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.211537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.211714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.211756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.211876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.211911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.212043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.212107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.212270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.212330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.212465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.212501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.212620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.212667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.212799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.212843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.212957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.212992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.213136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.213179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.213292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.213328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.213442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.213475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.213625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.213658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.213804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.213838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.213938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.213973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.214105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.214139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.214303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.214338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.214465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.214498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.214637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.214671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.214813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.214847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.214961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.214994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.215128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.215162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.215340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.215375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.215482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.215516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.215630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.215665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.215815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.215848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.215963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.215996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.216118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.216151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.216257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.216309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.803 [2024-12-09 04:16:36.216427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.216461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.216600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.216638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.216781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.216817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.216963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.216998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 00:26:07.803 [2024-12-09 04:16:36.217124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.803 [2024-12-09 04:16:36.217159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.803 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.217296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.217333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.217454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.217488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.217626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.217668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.217810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.217844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.217972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.218133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.218314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.218490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.218639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.218772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.218950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.218985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.219099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.219135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.219250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.219302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.219420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.219454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.219570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.219604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.219739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.219783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.219883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.219918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.220051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.220085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.220193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.220227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.220343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.220378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.220486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.220520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.220655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.220688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.220815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.220850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.221014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.221057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.221200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.221236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.221372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.221407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.221526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.221560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.221698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.221732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.221845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.221884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.221996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.222033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.222175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.222212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.222328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.222363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.222476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.222511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.222639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.222673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.222815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.222849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.223019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.223053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.223194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.223240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.223374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.223409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.223528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.223563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.223672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.223707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 
00:26:07.804 [2024-12-09 04:16:36.223884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.223919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.224033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.804 [2024-12-09 04:16:36.224068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.804 qpair failed and we were unable to recover it. 00:26:07.804 [2024-12-09 04:16:36.224209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.224244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.224377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.224411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.224521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.224555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 
00:26:07.805 [2024-12-09 04:16:36.224724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.224758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.224863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.224897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.225025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.225059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.225185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.225219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.225352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.225388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 
00:26:07.805 [2024-12-09 04:16:36.225512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.225546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.225738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.225773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.225906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.225941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.226078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.226113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.226288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.226339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 
00:26:07.805 [2024-12-09 04:16:36.226449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.226482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.226579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.226613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.226745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.226778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.226911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.226944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.227047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.227096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 
00:26:07.805 [2024-12-09 04:16:36.227191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.227222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.227336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.227369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.227492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.227527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.227658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.227697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 00:26:07.805 [2024-12-09 04:16:36.227866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.805 [2024-12-09 04:16:36.227900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.805 qpair failed and we were unable to recover it. 
00:26:07.805 [2024-12-09 04:16:36.228039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.228074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.228179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.228212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.228326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.228359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.228474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.228507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.228642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.228676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.228853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.228895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.229022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.229054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.229183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.229216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.229353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.229387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.229493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.229526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.229691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.229733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.229875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.229909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.230032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.230067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.230946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.231912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.231991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.805 [2024-12-09 04:16:36.232019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.805 qpair failed and we were unable to recover it.
00:26:07.805 [2024-12-09 04:16:36.232129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.232157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.232294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.232323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.232422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.232450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.232596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.232624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.232753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.232784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.232909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.232937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.233031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.233058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.233201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.233242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.233421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.233469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.233643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.233704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.233855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.233905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.234071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.234105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.234234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.234300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.234435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.234469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.234606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.234640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.234793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.234827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.234998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.235047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.235166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.235194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.235312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.235340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.235460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.235509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.235649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.235698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.235836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.235882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.236898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.236930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.237924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.237952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.238896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.238924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.239004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.239033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.806 [2024-12-09 04:16:36.239161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.806 [2024-12-09 04:16:36.239189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.806 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.239296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.239325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.239422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.239459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.239560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.239591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.239735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.239764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.239864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.239893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.239984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.240126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.240256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.240445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.240573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.240699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.240884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.240919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.241045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.241086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.241233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.241278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.241373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.241402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.241492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.241520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.241687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.241721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.241859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.241899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.242073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.242225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.242360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.242477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.242660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.242851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.242995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.243045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.243129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.243157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.243283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.243312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.243439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.243468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.243549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.807 [2024-12-09 04:16:36.243588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.807 qpair failed and we were unable to recover it.
00:26:07.807 [2024-12-09 04:16:36.243691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.243725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.243838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.243866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.244013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.244125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.244255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 
00:26:07.807 [2024-12-09 04:16:36.244416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.244587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.244711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.244858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.244885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 00:26:07.807 [2024-12-09 04:16:36.245008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.807 [2024-12-09 04:16:36.245036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.807 qpair failed and we were unable to recover it. 
00:26:07.808 [2024-12-09 04:16:36.245739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808 [2024-12-09 04:16:36.245775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.808 qpair failed and we were unable to recover it.
00:26:07.808 [2024-12-09 04:16:36.250207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.808 [2024-12-09 04:16:36.250250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.808 qpair failed and we were unable to recover it.
00:26:07.810 [2024-12-09 04:16:36.260306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.260353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.260453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.260488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.260627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.260656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.260758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.260805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.260964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.260998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.261100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.261135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.261302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.261348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.261477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.261506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.261620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.261649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.261763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.261795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.261929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.261968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.262063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.262096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.262184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.262230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.262343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.262372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.263215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.263261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.263427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.263458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.264479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.264513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.264674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.264704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.265474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.265508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.265649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.265680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.265798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.265828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.265926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.265959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.266083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.266206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.266335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.266454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.266610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.266769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.266902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.266931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.267094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.267123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.267279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.267313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.267406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.267435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.267556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.267592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.267696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.267724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.267826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.267862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 
00:26:07.810 [2024-12-09 04:16:36.268012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.268047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.268129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.810 [2024-12-09 04:16:36.268169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.810 qpair failed and we were unable to recover it. 00:26:07.810 [2024-12-09 04:16:36.268304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.268333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.268440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.268481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.268608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.268642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.268760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.268794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.268915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.268944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.269059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.269178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.269304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.269457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.269615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.269775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.269934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.269963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.270115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.270266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.270402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.270549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.270680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.270812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.270924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.270952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.271067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.271180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.271311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.271438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.271600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.271717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.271869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.271900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.272018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.272160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.272314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.272437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.272560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.272766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.272929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.272958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.273088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.273209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.273345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.273484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.273675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 
00:26:07.811 [2024-12-09 04:16:36.273793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.273943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.811 [2024-12-09 04:16:36.273974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.811 qpair failed and we were unable to recover it. 00:26:07.811 [2024-12-09 04:16:36.274090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.812 [2024-12-09 04:16:36.274119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.812 qpair failed and we were unable to recover it. 00:26:07.812 [2024-12-09 04:16:36.274212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.812 [2024-12-09 04:16:36.274240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.812 qpair failed and we were unable to recover it. 00:26:07.812 [2024-12-09 04:16:36.274378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.812 [2024-12-09 04:16:36.274406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.812 qpair failed and we were unable to recover it. 
00:26:07.812 [... the same connect() failed, errno = 111 / qpair failed and we were unable to recover it sequence repeats for tqpair=0x7fe5b4000b90, 0x7fe5c0000b90, and 0x1818fa0 (addr=10.0.0.2, port=4420) through 04:16:36.290362 ...]
00:26:07.814 [2024-12-09 04:16:36.290459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.290487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.290615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.290648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.290832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.290877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.291091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.291144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.291296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.291328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 
00:26:07.814 [2024-12-09 04:16:36.291412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.291440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.291531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.291560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.291684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.291732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.291861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.291908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.291994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 
00:26:07.814 [2024-12-09 04:16:36.292168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.292318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.292450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.292614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.292770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 
00:26:07.814 [2024-12-09 04:16:36.292917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.292945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.293067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.293097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.293219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.293249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.814 [2024-12-09 04:16:36.293381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.814 [2024-12-09 04:16:36.293410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.814 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.293502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.293532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.293666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.293712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.293876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.293921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.294036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.294178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.294303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.294430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.294576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.294744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.294861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.294889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.295004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.295154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.295313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.295442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.295596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.295742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.295912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.295939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.296036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.296174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.296306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.296438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.296564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.296720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.296869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.296897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.297022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.297178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.297308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.297461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.297588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.297763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.297917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.297959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.298113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.298143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.298261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.298297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.298394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.298422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.298512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.298542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 00:26:07.815 [2024-12-09 04:16:36.298657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.815 [2024-12-09 04:16:36.298701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.815 qpair failed and we were unable to recover it. 
00:26:07.815 [2024-12-09 04:16:36.298848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.298897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.299529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.299948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.299977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.300126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.300154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.300297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.300327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.300445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.300474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.300566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.300601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.300691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.300720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.300863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.300891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.301008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.301161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.301337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.301483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.301644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.301801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.301950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.301985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.302108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.302136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.302300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.302329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.302443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.302472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.302595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.302623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.302703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.302731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.302935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.302985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.303124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.303305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.303436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.303549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.303732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.303836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.303950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.303978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.304138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.304180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.304299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.304335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.304436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.304465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.304584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.304634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.304777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.304806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.816 [2024-12-09 04:16:36.304966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.305013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.305171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.305201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.305294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.305323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.305410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.305438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 00:26:07.816 [2024-12-09 04:16:36.305523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.816 [2024-12-09 04:16:36.305551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.816 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.305712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.305755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.305918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.305962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.306076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.306104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.306227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.306259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.306381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.306411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.306510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.306557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.306672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.306716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.306891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.306940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.307122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.307172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.307290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.307333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.307458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.307488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.307613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.307641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.307860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.307893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.308048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.308096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.308218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.308246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.308385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.308416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.308554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.308592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.308711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.308745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.308846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.308880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.309052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.309189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.309343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.309461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.309636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.309788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.309946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.309988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.310119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.310150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.310268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.310315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.310401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.310430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.310596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.310641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.310815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.310862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.310996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.311044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.311199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.311228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.311332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.311362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.311484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.311513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.311664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.311707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.311882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.311934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.312065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.312260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.312445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 
00:26:07.817 [2024-12-09 04:16:36.312565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.312711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.312825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.312941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.817 [2024-12-09 04:16:36.312971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.817 qpair failed and we were unable to recover it. 00:26:07.817 [2024-12-09 04:16:36.313132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.313175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.313302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.313345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.313470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.313499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.313631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.313661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.313757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.313786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.313885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.313915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.314015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.314043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.314184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.314227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.314344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.314376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.314501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.314532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.314684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.314729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.314844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.314872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.314973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.315089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.315245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.315389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.315500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.315649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.315798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.315965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.315993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.316119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.316147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.316252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.316325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.316416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.316447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.316569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.316598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.316716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.316744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.316841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.316884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.316988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.317144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.317305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.317451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.317572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.317686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.317829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.317954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.317984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.318094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.318136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 00:26:07.818 [2024-12-09 04:16:36.318278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.818 [2024-12-09 04:16:36.318309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.818 qpair failed and we were unable to recover it. 
00:26:07.818 [2024-12-09 04:16:36.318438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.318468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.318616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.318644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.318762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.318790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.318949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.318978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.319067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.319097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.319217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.319250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.319384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.319412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.818 [2024-12-09 04:16:36.319502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.818 [2024-12-09 04:16:36.319530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.818 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.319617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.319645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.319764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.319792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.319874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.319903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.319988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.320112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.320258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.320423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.320570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.320723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.320881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.320910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.321883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.321912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.322924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.322952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.323893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.323982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.324010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.324158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.324186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.324267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.324300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.324391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.324420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.324519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.324548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.819 [2024-12-09 04:16:36.324660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.819 [2024-12-09 04:16:36.324687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.819 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.324775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.324803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.324897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.324926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.325074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.325223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.325386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.325533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.325717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.325867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.325994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.326863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.326973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.327881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.327974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.328123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.328243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.328430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.328609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.328756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.328914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.328942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.329946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.329976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.330969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.330999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.331118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.331147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.331261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.331296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.820 qpair failed and we were unable to recover it.
00:26:07.820 [2024-12-09 04:16:36.331413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.820 [2024-12-09 04:16:36.331442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821 qpair failed and we were unable to recover it.
00:26:07.821 [2024-12-09 04:16:36.331562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821 [2024-12-09 04:16:36.331591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821 qpair failed and we were unable to recover it.
00:26:07.821 [2024-12-09 04:16:36.331813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821 [2024-12-09 04:16:36.331872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821 qpair failed and we were unable to recover it.
00:26:07.821 [2024-12-09 04:16:36.332041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821 [2024-12-09 04:16:36.332088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821 qpair failed and we were unable to recover it.
00:26:07.821 [2024-12-09 04:16:36.332209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.821 [2024-12-09 04:16:36.332237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:07.821 qpair failed and we were unable to recover it.
00:26:07.821 [2024-12-09 04:16:36.332378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.332409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.332506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.332534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.332632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.332660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.332750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.332777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.332901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.332933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 
00:26:07.821 [2024-12-09 04:16:36.333048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.333168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.333299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.333413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.333534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 
00:26:07.821 [2024-12-09 04:16:36.333701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.333855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.333883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.334032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.334192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.334347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 
00:26:07.821 [2024-12-09 04:16:36.334477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.334630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.334774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.334907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.334935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.335034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 
00:26:07.821 [2024-12-09 04:16:36.335165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.335293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.335437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.335657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.335798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 
00:26:07.821 [2024-12-09 04:16:36.335924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.335952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.336045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.336073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.336198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.336228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.336344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.336373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 00:26:07.821 [2024-12-09 04:16:36.336478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.821 [2024-12-09 04:16:36.336506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:07.821 qpair failed and we were unable to recover it. 
00:26:08.097 [2024-12-09 04:16:36.336717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.097 [2024-12-09 04:16:36.336753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.097 qpair failed and we were unable to recover it. 00:26:08.097 [2024-12-09 04:16:36.336916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.097 [2024-12-09 04:16:36.336959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.097 qpair failed and we were unable to recover it. 00:26:08.097 [2024-12-09 04:16:36.337099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.097 [2024-12-09 04:16:36.337135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.097 qpair failed and we were unable to recover it. 00:26:08.097 [2024-12-09 04:16:36.337353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.097 [2024-12-09 04:16:36.337383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.337479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.337506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 
00:26:08.098 [2024-12-09 04:16:36.337591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.337618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.337798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.337831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.337955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.338145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.338300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 
00:26:08.098 [2024-12-09 04:16:36.338415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.338532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.338697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.338842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.338869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.339023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.339056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 
00:26:08.098 [2024-12-09 04:16:36.339204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.339247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.339370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.339413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.339513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.339543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.339661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.339690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.339794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.339845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 
00:26:08.098 [2024-12-09 04:16:36.339987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.340156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.340301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.340415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.340558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 
00:26:08.098 [2024-12-09 04:16:36.340685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.340841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.340869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.340996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.341024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.341162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.341191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.341294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.341329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 
00:26:08.098 [2024-12-09 04:16:36.341425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.341455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.341579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.341607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.341754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.098 [2024-12-09 04:16:36.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.098 qpair failed and we were unable to recover it. 00:26:08.098 [2024-12-09 04:16:36.341864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.341891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.342011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.342040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.342186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.342217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.342330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.342373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.342476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.342519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.342617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.342647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.342823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.342875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.343086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.343142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.343254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.343299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.343415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.343443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.343595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.343622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.343762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.343808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.343939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.343987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.344097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.344125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.344220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.344249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.344398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.344441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.344585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.344627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.344733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.344763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.344851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.344879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.344992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.345020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.345130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.345173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.345336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.345366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.345465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.345493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.345643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.345670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.345791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.345820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.345957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.346153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.346280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.346455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.346642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.346797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 00:26:08.099 [2024-12-09 04:16:36.346947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.099 [2024-12-09 04:16:36.346976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.099 qpair failed and we were unable to recover it. 
00:26:08.099 [2024-12-09 04:16:36.347122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.347233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.347369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.347481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.347602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.347742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.347854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.347883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.348024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.348068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.348164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.348207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.348361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.348392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.348487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.348516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.348660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.348705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.348845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.348891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.349028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.349073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.349216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.349245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.349365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.349396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.349521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.349567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.349727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.349774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.349879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.349913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.350080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.350109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.350240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.350267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.350373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.350401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.350511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.350539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.350699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.350726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.350839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.350867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.351057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.351246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.351419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.351570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.351738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.351853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.351994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.100 [2024-12-09 04:16:36.352036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.100 qpair failed and we were unable to recover it.
00:26:08.100 [2024-12-09 04:16:36.352219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.352268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.352409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.352438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.352551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.352579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.352665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.352693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.352813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.352844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.352962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.352991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.353133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.353289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.353405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.353558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.353709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.353861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.353984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.354960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.354988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.355105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.355133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.355252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.355292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.355412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.355442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.355589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.355617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.355706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.355734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.355863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.355892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.356019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.356049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.356196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.356225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.356382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.356412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.356538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.356585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.101 [2024-12-09 04:16:36.356701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.101 [2024-12-09 04:16:36.356730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.101 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.356856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.356884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.357931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.357961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.358954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.358983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.359112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.359140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.359254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.359289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.359377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.359406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.359566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.359609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.359739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.359792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.360124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.360204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.360384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.360413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.360514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.360542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.360632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.360662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.360820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.360848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.361059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.361094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.361225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.361254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.361376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.361405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.361524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.361552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.361672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.361701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.102 [2024-12-09 04:16:36.361815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.102 [2024-12-09 04:16:36.361843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.102 qpair failed and we were unable to recover it.
00:26:08.103 [2024-12-09 04:16:36.361977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.103 [2024-12-09 04:16:36.362020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.103 qpair failed and we were unable to recover it.
00:26:08.103 [2024-12-09 04:16:36.362155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.362185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.362304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.362334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.362429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.362456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.362609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.362637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.362733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.362760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 
00:26:08.103 [2024-12-09 04:16:36.362895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.362945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.363094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.363144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.363254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.363290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.363389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.363419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.363520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.363549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 
00:26:08.103 [2024-12-09 04:16:36.363702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.363731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.363852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.363880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.364001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.364029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.364167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.364210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.364368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.364399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 
00:26:08.103 [2024-12-09 04:16:36.364509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.364551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.364694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.364754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.364971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.365036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.365260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.365300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.365438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.365466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 
00:26:08.103 [2024-12-09 04:16:36.365586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.365630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.365841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.365875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.365982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.366030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.366139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.103 [2024-12-09 04:16:36.366173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.103 qpair failed and we were unable to recover it. 00:26:08.103 [2024-12-09 04:16:36.366313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.366358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.366447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.366475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.366615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.366648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.366873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.366937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.367199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.367232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.367397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.367425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.367520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.367564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.367705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.367749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.367974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.368168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.368383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.368535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.368677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.368812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.368939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.368968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.369109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.369139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.369285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.369314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.369460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.369490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.369630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.369672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.369793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.369822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.369915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.369944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.370063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.370190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.370311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.370487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.370603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.370758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.370963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.370993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.371115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.371157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.371298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.371326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.104 [2024-12-09 04:16:36.371429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.371459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 
00:26:08.104 [2024-12-09 04:16:36.371590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.104 [2024-12-09 04:16:36.371628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.104 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.371823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.371878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.371973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.372117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.372240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 
00:26:08.105 [2024-12-09 04:16:36.372370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.372519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.372689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.372825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.372870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.372971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 
00:26:08.105 [2024-12-09 04:16:36.373162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.373293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.373437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.373628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.373778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 
00:26:08.105 [2024-12-09 04:16:36.373907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.373935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 
00:26:08.105 [2024-12-09 04:16:36.374567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.374844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.374994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.375137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 
00:26:08.105 [2024-12-09 04:16:36.375263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.375399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.375547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.375700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.375918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.375947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 
00:26:08.105 [2024-12-09 04:16:36.376062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.376091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.376185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.105 [2024-12-09 04:16:36.376213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.105 qpair failed and we were unable to recover it. 00:26:08.105 [2024-12-09 04:16:36.376327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.376355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.376499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.376527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.376643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.376677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 
00:26:08.106 [2024-12-09 04:16:36.376867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.376901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.377069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.377144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.377364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.377393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.377541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.377568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.377736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.377770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 
00:26:08.106 [2024-12-09 04:16:36.377912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.377959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.378106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.378159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.378353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.378381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.378528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.378556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.378769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.378802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 
00:26:08.106 [2024-12-09 04:16:36.378937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.378986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.379130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.379164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.379332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.379395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.379529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.379561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.379723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.379753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 
00:26:08.106 [2024-12-09 04:16:36.379848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.379877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.380024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.380070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.380309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.380340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.380468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.380497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.380601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.380645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 
00:26:08.106 [2024-12-09 04:16:36.380807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.380860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.381028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.381064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.381234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.381302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.381453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.381482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.381632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.381661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 
00:26:08.106 [2024-12-09 04:16:36.381787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.381816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.381973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.382037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.382307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.106 [2024-12-09 04:16:36.382358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.106 qpair failed and we were unable to recover it. 00:26:08.106 [2024-12-09 04:16:36.382473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.382502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.382656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.382686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 
00:26:08.107 [2024-12-09 04:16:36.382816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.382845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.383035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.383069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.383216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.383250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.383411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.383440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.383535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.383564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 
00:26:08.107 [2024-12-09 04:16:36.383753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.383787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.383944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.383978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.384142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.384176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.384325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.384354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.384502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.384531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 
00:26:08.107 [2024-12-09 04:16:36.384731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.384793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.385152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.385185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.385366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.385395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.385542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.385591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.385708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.385742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 
00:26:08.107 [2024-12-09 04:16:36.385960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.386024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.386257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.386314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.386456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.386485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.386613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.386642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.386739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.386768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 
00:26:08.107 [2024-12-09 04:16:36.387015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.387079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.387290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.387319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.387414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.387444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.387537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.387599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.387862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.387925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 
00:26:08.107 [2024-12-09 04:16:36.388218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.388321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.388460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.388489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.388635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.388664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.388812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.107 [2024-12-09 04:16:36.388846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.107 qpair failed and we were unable to recover it. 00:26:08.107 [2024-12-09 04:16:36.389065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.389131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 
00:26:08.108 [2024-12-09 04:16:36.389353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.389383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.389477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.389506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.389622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.389682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.389974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.390037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.390287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.390337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 
00:26:08.108 [2024-12-09 04:16:36.390432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.390461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.390588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.390622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.390816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.390880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.391102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.391136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.391285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.391315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 
00:26:08.108 [2024-12-09 04:16:36.391432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.391461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.391590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.391619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.391871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.391939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.392179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.392232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.392379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.392409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 
00:26:08.108 [2024-12-09 04:16:36.392497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.392526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.392617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.392666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.392878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.392913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.393080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.393149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.393412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.393442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 
00:26:08.108 [2024-12-09 04:16:36.393573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.393652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.393899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.393964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.394230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.394264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.394384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.394414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.394537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.394567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 
00:26:08.108 [2024-12-09 04:16:36.394665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.394695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.394901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.394929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.395069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.395120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.395264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.108 [2024-12-09 04:16:36.395299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.108 qpair failed and we were unable to recover it. 00:26:08.108 [2024-12-09 04:16:36.395503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.109 [2024-12-09 04:16:36.395532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.109 qpair failed and we were unable to recover it. 
00:26:08.109 [2024-12-09 04:16:36.395761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.109 [2024-12-09 04:16:36.395790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.109 qpair failed and we were unable to recover it.
00:26:08.109 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair (errno = 111, tqpair=0x1818fa0, addr=10.0.0.2, port=4420) repeats continuously through [2024-12-09 04:16:36.427706]; duplicate records elided ...]
00:26:08.112 [2024-12-09 04:16:36.427909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.112 [2024-12-09 04:16:36.427974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.112 qpair failed and we were unable to recover it. 00:26:08.112 [2024-12-09 04:16:36.428148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.112 [2024-12-09 04:16:36.428211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.112 qpair failed and we were unable to recover it. 00:26:08.112 [2024-12-09 04:16:36.428517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.112 [2024-12-09 04:16:36.428593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.112 qpair failed and we were unable to recover it. 00:26:08.112 [2024-12-09 04:16:36.428893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.112 [2024-12-09 04:16:36.428957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.112 qpair failed and we were unable to recover it. 00:26:08.112 [2024-12-09 04:16:36.429255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.429335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.429572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.429606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.429747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.429781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.429886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.429919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.430135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.430214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.430516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.430592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.430839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.430902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.431092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.431156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.431387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.431422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.431587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.431620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.431833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.431896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.432071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.432135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.432389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.432455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.432753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.432816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.433124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.433188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.433468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.433533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.433776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.433842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.434110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.434144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.434239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.434279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.434490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.434524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.434669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.434703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.434869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.434932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.435184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.435248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.435476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.435542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.435797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.435863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.436162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.436196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.436340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.436374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.436518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.436552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.436774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.436838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.437070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.437104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.437218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.437251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 
00:26:08.113 [2024-12-09 04:16:36.437437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.113 [2024-12-09 04:16:36.437501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.113 qpair failed and we were unable to recover it. 00:26:08.113 [2024-12-09 04:16:36.437787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.437851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.438062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.438127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.438318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.438383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.438615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.438679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 
00:26:08.114 [2024-12-09 04:16:36.438917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.438980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.439310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.439374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.439623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.439687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.439966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.439999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.440138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.440188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 
00:26:08.114 [2024-12-09 04:16:36.440520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.440590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.440797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.440860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.441121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.441155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.441297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.441331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.441451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.441484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 
00:26:08.114 [2024-12-09 04:16:36.441621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.441656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.441898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.441954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.442160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.442233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.442504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.442568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.442765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.442828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 
00:26:08.114 [2024-12-09 04:16:36.443045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.443109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.443333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.443396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.443565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.443598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.443799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.443878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 00:26:08.114 [2024-12-09 04:16:36.444144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.444178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.114 qpair failed and we were unable to recover it. 
00:26:08.114 [2024-12-09 04:16:36.444320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.114 [2024-12-09 04:16:36.444354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.444492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.444527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.444720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.444784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.445065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.445128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.445366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.445432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 
00:26:08.115 [2024-12-09 04:16:36.445722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.445786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.446082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.446115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.446255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.446295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.446479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.446544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.446774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.446807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 
00:26:08.115 [2024-12-09 04:16:36.446955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.446988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.447224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.447322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.447627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.447690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.447969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.448033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.448293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.448359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 
00:26:08.115 [2024-12-09 04:16:36.448617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.448651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.448827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.448906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.449168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.449232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.449502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.449566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.449844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.449907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 
00:26:08.115 [2024-12-09 04:16:36.450152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.450216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.450458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.450524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.450747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.450782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.450912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.450945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.451060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.451094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 
00:26:08.115 [2024-12-09 04:16:36.451269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.451361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.451655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.451719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.452028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.452092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.452355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.452420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.452714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.452783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 
00:26:08.115 [2024-12-09 04:16:36.453021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.115 [2024-12-09 04:16:36.453085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.115 qpair failed and we were unable to recover it. 00:26:08.115 [2024-12-09 04:16:36.453332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.453397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.453672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.453735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.454019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.454082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.454382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.454447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 
00:26:08.116 [2024-12-09 04:16:36.454715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.454748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.454890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.454924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.455051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.455085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.455199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.455234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.455498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.455563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 
00:26:08.116 [2024-12-09 04:16:36.455815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.455885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.456074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.456138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.456352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.456417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.456646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.456710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.456999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.457062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 
00:26:08.116 [2024-12-09 04:16:36.457348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.457413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.457702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.457777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.458051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.458114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.458337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.458402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.458662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.458726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 
00:26:08.116 [2024-12-09 04:16:36.459003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.459037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.459142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.459176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.459390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.459465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.459718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.459782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.460021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.460094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 
00:26:08.116 [2024-12-09 04:16:36.460335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.460399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.460689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.460757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.461029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.461074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.461207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.461241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.461526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.461590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 
00:26:08.116 [2024-12-09 04:16:36.461885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.461950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.462244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.116 [2024-12-09 04:16:36.462330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.116 qpair failed and we were unable to recover it. 00:26:08.116 [2024-12-09 04:16:36.462616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.462650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.462785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.462820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.462934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.462968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 
00:26:08.117 [2024-12-09 04:16:36.463095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.463134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.463417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.463483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.463677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.463711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.463828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.463861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.464104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.464174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 
00:26:08.117 [2024-12-09 04:16:36.464436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.464501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.464738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.464803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.465101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.465167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.465378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.465442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.465691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.465755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 
00:26:08.117 [2024-12-09 04:16:36.466038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.466103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.466386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.466421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.466552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.466586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.466731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.466770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.466873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.466906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 
00:26:08.117 [2024-12-09 04:16:36.467171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.467235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.467417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.467481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.467699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.467761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.468024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.468088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.468307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.468372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 
00:26:08.117 [2024-12-09 04:16:36.468659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.468723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.469007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.469041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.469147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.469182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.469397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.469432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.469532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.469566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 
00:26:08.117 [2024-12-09 04:16:36.469676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.469709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.469960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.470023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.470319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.470386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.470649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.117 [2024-12-09 04:16:36.470702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.117 qpair failed and we were unable to recover it. 00:26:08.117 [2024-12-09 04:16:36.470856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.470893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 
00:26:08.118 [2024-12-09 04:16:36.471216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.471323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.471585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.471621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.471812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.471883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.472185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.472260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.472611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.472685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 
00:26:08.118 [2024-12-09 04:16:36.472973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.473047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.473310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.473390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.473662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.473737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.473992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.474070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.474374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.474464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 
00:26:08.118 [2024-12-09 04:16:36.474711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.474792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.475100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.475193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.475497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.475537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.475667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.475703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 00:26:08.118 [2024-12-09 04:16:36.475942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.476020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it. 
00:26:08.118 [2024-12-09 04:16:36.476285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.118 [2024-12-09 04:16:36.476361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.118 qpair failed and we were unable to recover it.
[the connect() failure and qpair recovery error above repeated for every subsequent reconnect attempt from 04:16:36.476640 through 04:16:36.513030; all attempts failed identically with errno = 111 against 10.0.0.2, port 4420]
00:26:08.122 [2024-12-09 04:16:36.513349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.513432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.513688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.513762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.514039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.514114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.514322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.514396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.514637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.514711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 
00:26:08.122 [2024-12-09 04:16:36.514962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.515028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.515321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.515388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.515649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.515715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.515972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.516042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.516333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.516400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 
00:26:08.122 [2024-12-09 04:16:36.516660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.516726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.517016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.517081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.517329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.517399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.517661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.517727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.518032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.518107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 
00:26:08.122 [2024-12-09 04:16:36.518407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.518475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.518771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.518838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.519091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.122 [2024-12-09 04:16:36.519157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.122 qpair failed and we were unable to recover it. 00:26:08.122 [2024-12-09 04:16:36.519484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.519551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.519800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.519868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 
00:26:08.123 [2024-12-09 04:16:36.520125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.520189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.520493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.520559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.520748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.520814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.521026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.521092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.521391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.521469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 
00:26:08.123 [2024-12-09 04:16:36.521765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.521800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.521932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.521966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.522193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.522262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.522569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.522662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.522969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.523030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 
00:26:08.123 [2024-12-09 04:16:36.523306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.523340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.523527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.523589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.523878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.523940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.524138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.524171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.524318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.524351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 
00:26:08.123 [2024-12-09 04:16:36.524553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.524625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.524927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.525002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.525310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.525376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.525637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.525710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.525978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.526044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 
00:26:08.123 [2024-12-09 04:16:36.526354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.526421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.526724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.526790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.527098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.527164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.527473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.527539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.527788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.527854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 
00:26:08.123 [2024-12-09 04:16:36.528143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.528210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.528519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.528590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.528840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.528910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.529160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.123 [2024-12-09 04:16:36.529195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.123 qpair failed and we were unable to recover it. 00:26:08.123 [2024-12-09 04:16:36.529345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.529381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 
00:26:08.124 [2024-12-09 04:16:36.529642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.529707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.530002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.530067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.530357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.530425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.530716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.530781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.531075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.531139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 
00:26:08.124 [2024-12-09 04:16:36.531421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.531488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.531783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.531848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.532080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.532114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.532287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.532342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.532568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.532634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 
00:26:08.124 [2024-12-09 04:16:36.532820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.532887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.533147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.533212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.533530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.533596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.533900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.533964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.534258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.534341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 
00:26:08.124 [2024-12-09 04:16:36.534631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.534695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.534950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.535017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.535322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.535390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.535586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.535661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.535948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.536014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 
00:26:08.124 [2024-12-09 04:16:36.536263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.536344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.536636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.536700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.536999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.537063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.537351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.537387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 00:26:08.124 [2024-12-09 04:16:36.537524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.124 [2024-12-09 04:16:36.537558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.124 qpair failed and we were unable to recover it. 
00:26:08.124 [2024-12-09 04:16:36.537694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.124 [2024-12-09 04:16:36.537728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.124 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / qpair failed message pair repeats for tqpair=0x7fe5b4000b90 ...]
00:26:08.128 [2024-12-09 04:16:36.568292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.128 [2024-12-09 04:16:36.568345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.128 qpair failed and we were unable to recover it.
[... the same message pair repeats for tqpair=0x7fe5c0000b90, then again for tqpair=0x7fe5b4000b90 ...]
00:26:08.128 [2024-12-09 04:16:36.573044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.573110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.573420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.573500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.573766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.573835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.574129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.574195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.574471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.574538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 
00:26:08.128 [2024-12-09 04:16:36.574787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.574853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.575111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.575176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.575456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.575523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.575743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.128 [2024-12-09 04:16:36.575810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.128 qpair failed and we were unable to recover it. 00:26:08.128 [2024-12-09 04:16:36.576033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.576099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.576369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.576435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.576693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.576759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.577034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.577103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.577307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.577374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.577578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.577643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.577894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.577963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.578224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.578302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.578539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.578605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.578866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.578934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.579227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.579307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.579557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.579624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.579867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.579901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.580039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.580073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.580249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.580291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.580432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.580486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.580752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.580849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.581106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.581174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.581407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.581478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.581748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.581815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.582033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.582103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.582406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.582474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.582774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.582847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.583089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.583156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.583401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.583467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.583768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.583842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.584104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.584168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.584451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.584516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.584763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.584798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.584966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.585007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.585266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.585312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 
00:26:08.129 [2024-12-09 04:16:36.585430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.585464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.585646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.129 [2024-12-09 04:16:36.585711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.129 qpair failed and we were unable to recover it. 00:26:08.129 [2024-12-09 04:16:36.585961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.586027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.586302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.586337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.586475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.586510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.586776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.586841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.587082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.587147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.587370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.587437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.587728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.587794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.588004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.588068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.588366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.588442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.588738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.588803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.589062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.589128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.589352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.589419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.589614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.589678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.589915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.589979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.590235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.590323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.590615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.590679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.590926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.590990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.591218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.591300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.591538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.591602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.591829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.591862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.592016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.592050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.592288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.592354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.592607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.592673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.592946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.593011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.593281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.593316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.593458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.593494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.593661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.593695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.593953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.594018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.594218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.594305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.594611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.594686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.594929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.594993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.595178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.595245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 00:26:08.130 [2024-12-09 04:16:36.595474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.130 [2024-12-09 04:16:36.595539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420 00:26:08.130 qpair failed and we were unable to recover it. 
00:26:08.130 [2024-12-09 04:16:36.595824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.130 [2024-12-09 04:16:36.595890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5c0000b90 with addr=10.0.0.2, port=4420
00:26:08.130 qpair failed and we were unable to recover it.
00:26:08.134 [2024-12-09 04:16:36.628636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.628735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.629001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.629069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.629327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.629397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.629696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.629763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.630060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.630125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 
00:26:08.134 [2024-12-09 04:16:36.630413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.630478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.630776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.630841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.631138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.631213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.631470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.631535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.631823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.631888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 
00:26:08.134 [2024-12-09 04:16:36.632174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.632238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.632542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.632606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.632822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.632886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.633150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.633214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.633504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.633568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 
00:26:08.134 [2024-12-09 04:16:36.633858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.633922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.634210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.634295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.634558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.634591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.634735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.134 [2024-12-09 04:16:36.634785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.134 qpair failed and we were unable to recover it. 00:26:08.134 [2024-12-09 04:16:36.635070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.635134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.635413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.635478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.635777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.635852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.636134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.636199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.636506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.636582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.636805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.636870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.637151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.637214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.637529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.637593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.637842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.637881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.638026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.638060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.638163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.638196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.638408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.638474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.638766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.638829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.639081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.639145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.639393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.639458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.639753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.639817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.640114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.640177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.640376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.640440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.640717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.640750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.640873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.640908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.641213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.641289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.641504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.641569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.641874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.641908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.642016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.642050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.642321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.642386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.642628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.642692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.642955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.643019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.643317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.643392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.643677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.643741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.644041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.644115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.644324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.644389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 
00:26:08.135 [2024-12-09 04:16:36.644650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.644715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.645009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.135 [2024-12-09 04:16:36.645042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.135 qpair failed and we were unable to recover it. 00:26:08.135 [2024-12-09 04:16:36.645150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.645186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.645460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.645525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.645780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.645854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 
00:26:08.136 [2024-12-09 04:16:36.646155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.646231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.646534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.646599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.646889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.646923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.647090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.647124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.647337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.647403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 
00:26:08.136 [2024-12-09 04:16:36.647662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.647725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.648010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.648074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.648324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.648390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.648567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.648631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.648876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.648940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 
00:26:08.136 [2024-12-09 04:16:36.649200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.649267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.649551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.649615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.649842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.649906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.650165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.650199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.650307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.650342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 
00:26:08.136 [2024-12-09 04:16:36.650509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.650542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.650793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.650857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.651123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.651157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.651267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.651311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.651420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.651453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 
00:26:08.136 [2024-12-09 04:16:36.651572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.651605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.651719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.651753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.651868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.651901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.652045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.652079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 00:26:08.136 [2024-12-09 04:16:36.652219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.136 [2024-12-09 04:16:36.652304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.136 qpair failed and we were unable to recover it. 
00:26:08.136 [2024-12-09 04:16:36.652453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.136 [2024-12-09 04:16:36.652486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.137 qpair failed and we were unable to recover it.
00:26:08.420 [2024-12-09 04:16:36.681895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.681959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.682260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.682342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.682525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.682591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.682876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.682939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.683185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.683249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 
00:26:08.420 [2024-12-09 04:16:36.683495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.683558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.683831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.683896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.684198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.684262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.684525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.684591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.684892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.684956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 
00:26:08.420 [2024-12-09 04:16:36.685203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.685268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.685589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.685654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.685946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.686011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.686331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.686397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.686698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.686763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 
00:26:08.420 [2024-12-09 04:16:36.687053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.687117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.687376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.687441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.687652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.687716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.688000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.688064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.688320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.688385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 
00:26:08.420 [2024-12-09 04:16:36.688629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.688692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.688911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.688962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.689095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.689129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.689372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.689436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.689710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.689774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 
00:26:08.420 [2024-12-09 04:16:36.690023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.690090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.420 qpair failed and we were unable to recover it. 00:26:08.420 [2024-12-09 04:16:36.690386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.420 [2024-12-09 04:16:36.690462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.690766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.690830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.691048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.691114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.691325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.691391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.691575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.691639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.691912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.691975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.692245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.692322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.692547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.692613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.692872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.692906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.693025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.693061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.693233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.693315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.693579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.693649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.693887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.693937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.694140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.694192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.694365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.694417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.694676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.694727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.694971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.695022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.695199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.695250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.695494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.695546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.695752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.695803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.696009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.696060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.696351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.696404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.696658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.696708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.696906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.696967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.697176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.697228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.697459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.697510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.697753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.697804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.698013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.698064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.698331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.698384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.698627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.698677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.698897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.698933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.699072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.699250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.699445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.421 [2024-12-09 04:16:36.699589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.699727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.699829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.699966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.699992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 00:26:08.421 [2024-12-09 04:16:36.700185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.421 [2024-12-09 04:16:36.700242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.421 qpair failed and we were unable to recover it. 
00:26:08.422 [2024-12-09 04:16:36.700444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.700470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.700557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.700583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.700694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.700720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.700802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.700828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.700943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.700969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 
00:26:08.422 [2024-12-09 04:16:36.701051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.701087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.701231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.701257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.701377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.701412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.701624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.701682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 00:26:08.422 [2024-12-09 04:16:36.701913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.422 [2024-12-09 04:16:36.701972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420 00:26:08.422 qpair failed and we were unable to recover it. 
00:26:08.422 [2024-12-09 04:16:36.702158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.422 [2024-12-09 04:16:36.702228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1818fa0 with addr=10.0.0.2, port=4420
00:26:08.422 qpair failed and we were unable to recover it.
00:26:08.424 [2024-12-09 04:16:36.719361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:08.424 [2024-12-09 04:16:36.719439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420
00:26:08.424 qpair failed and we were unable to recover it.
00:26:08.425 [2024-12-09 04:16:36.731356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.731393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.731622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.731673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.731869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.731921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.732172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.732222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.732438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.732493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 
00:26:08.425 [2024-12-09 04:16:36.732725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.732775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.732937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.732986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.733146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.733197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.733415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.733554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.733794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.733846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 
00:26:08.425 [2024-12-09 04:16:36.734036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.734089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.734354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.734407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.734606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.734654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.734828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.734879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.735059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.735128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 
00:26:08.425 [2024-12-09 04:16:36.735323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.735374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.735531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.735565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.735709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.735746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.735930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.736003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 00:26:08.425 [2024-12-09 04:16:36.736187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.425 [2024-12-09 04:16:36.736237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.425 qpair failed and we were unable to recover it. 
00:26:08.425 [2024-12-09 04:16:36.736477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.736527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.736716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.736790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.737031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.737082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.737324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.737379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.737574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.737622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 
00:26:08.426 [2024-12-09 04:16:36.737798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.737849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.738034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.738084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.738311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.738361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.738584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.738620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.738795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.738829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 
00:26:08.426 [2024-12-09 04:16:36.739063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.739112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.739337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.739397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b8000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.739542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.739593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.739784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.739841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.740039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.740123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 
00:26:08.426 [2024-12-09 04:16:36.740375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.740426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.740608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.740643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.740753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.740788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.740893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.740928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.741069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.741103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 
00:26:08.426 [2024-12-09 04:16:36.741288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.741340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.741588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.741625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.741760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.741793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.742014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.742064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.742254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.742327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 
00:26:08.426 [2024-12-09 04:16:36.742524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.742582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.742821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.742871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.743151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.743216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.743417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.743466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.743612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.743663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 
00:26:08.426 [2024-12-09 04:16:36.743890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.743940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.744206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.744325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.744540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.426 [2024-12-09 04:16:36.744590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.426 qpair failed and we were unable to recover it. 00:26:08.426 [2024-12-09 04:16:36.744793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.744844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.745055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.745124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 
00:26:08.427 [2024-12-09 04:16:36.745395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.745430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.745541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.745576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.745733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.745814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.745977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.746053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.746228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.746291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 
00:26:08.427 [2024-12-09 04:16:36.746431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.746465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.746631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.746689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.746841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.746886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.747068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.747113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.747293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.747341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 
00:26:08.427 [2024-12-09 04:16:36.747555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.747601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.747779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.747824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.748067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.748116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.748343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.748393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.748569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.748617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 
00:26:08.427 [2024-12-09 04:16:36.748813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.748863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.749060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.749107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.749296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.749342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.749509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.749556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.749695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.749743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 
00:26:08.427 [2024-12-09 04:16:36.749891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.749936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.750106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.750152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.750389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.750424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.750541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.750575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 00:26:08.427 [2024-12-09 04:16:36.750745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.427 [2024-12-09 04:16:36.750798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.427 qpair failed and we were unable to recover it. 
00:26:08.430 [2024-12-09 04:16:36.775858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.775892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.776050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.776098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.776294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.776342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.776532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.776565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.776739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.776792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 
00:26:08.430 [2024-12-09 04:16:36.777020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.777066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.777252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.777328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.777504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.777550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.777761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.777807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.778023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.778069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 
00:26:08.430 [2024-12-09 04:16:36.778255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.778315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.778494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.778538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.778721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.778769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.430 qpair failed and we were unable to recover it. 00:26:08.430 [2024-12-09 04:16:36.778978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.430 [2024-12-09 04:16:36.779023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.779207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.779254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.779476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.779523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.779711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.779759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.779920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.779965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.780158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.780207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.780362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.780411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.780604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.780651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.780871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.780918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.781113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.781159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.781383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.781430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.781654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.781700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.781908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.781954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.782156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.782220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.782495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.782560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.782776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.782812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.782952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.782987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.783192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.783259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.783520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.783585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.783840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.783905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.784113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.784196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.784443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.784508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.784665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.784742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.784984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.785051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.785321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.785368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.785553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.785587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.785727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.785761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.785912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.785959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.786137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.786183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.786417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.786465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.786654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.786699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.786921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.786967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 
00:26:08.431 [2024-12-09 04:16:36.787193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.787259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.431 qpair failed and we were unable to recover it. 00:26:08.431 [2024-12-09 04:16:36.787472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.431 [2024-12-09 04:16:36.787517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.787736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.787781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.787954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.788001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.788205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.788268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 
00:26:08.432 [2024-12-09 04:16:36.788478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.788512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.788740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.788809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.789091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.789155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.789408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.789475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.789717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.789784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 
00:26:08.432 [2024-12-09 04:16:36.790060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.790124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.790371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.790436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.790661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.790728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.790949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.791033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.791320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.791386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 
00:26:08.432 [2024-12-09 04:16:36.791590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.791636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.791760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.791806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.791979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.792025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.792290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.792371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.792517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.792565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 
00:26:08.432 [2024-12-09 04:16:36.792785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.792830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.793004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.793050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.793195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.793244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.793461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.793507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.793689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.793736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 
00:26:08.432 [2024-12-09 04:16:36.793926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.793972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.794185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.794229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.794464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.794511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.794752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.794818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 00:26:08.432 [2024-12-09 04:16:36.794981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.795067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it. 
00:26:08.432 [2024-12-09 04:16:36.795333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.432 [2024-12-09 04:16:36.795381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.432 qpair failed and we were unable to recover it.
00:26:08.435 [2024-12-09 04:16:36.822468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.435 [2024-12-09 04:16:36.822502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.435 qpair failed and we were unable to recover it. 00:26:08.435 [2024-12-09 04:16:36.822644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.435 [2024-12-09 04:16:36.822678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.435 qpair failed and we were unable to recover it. 00:26:08.435 [2024-12-09 04:16:36.822842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.435 [2024-12-09 04:16:36.822888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.435 qpair failed and we were unable to recover it. 00:26:08.435 [2024-12-09 04:16:36.823116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.435 [2024-12-09 04:16:36.823162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.435 qpair failed and we were unable to recover it. 00:26:08.435 [2024-12-09 04:16:36.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.435 [2024-12-09 04:16:36.823431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.435 qpair failed and we were unable to recover it. 
00:26:08.435 [2024-12-09 04:16:36.823594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.435 [2024-12-09 04:16:36.823642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.435 qpair failed and we were unable to recover it. 00:26:08.435 [2024-12-09 04:16:36.823822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.823870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.824090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.824136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.824264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.824332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.824474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.824508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 
00:26:08.436 [2024-12-09 04:16:36.824618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.824652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.824825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.824860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.825046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.825093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.825269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.825363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.825508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.825542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 
00:26:08.436 [2024-12-09 04:16:36.825686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.825720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.825822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.825856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.825974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.826019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.826207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.826253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.826433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.826467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 
00:26:08.436 [2024-12-09 04:16:36.826615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.826648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.826829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.826876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.827058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.827103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.827245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.827305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.827448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.827482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 
00:26:08.436 [2024-12-09 04:16:36.827631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.827665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.827853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.827898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.828082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.828129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.828297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.828355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.828491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.828524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 
00:26:08.436 [2024-12-09 04:16:36.828746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.828791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.828942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.828988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.829131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.829186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.829385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.829420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.829520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.829556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 
00:26:08.436 [2024-12-09 04:16:36.829728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.829762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.829985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.830019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.830139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.830173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.830283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.436 [2024-12-09 04:16:36.830318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.436 qpair failed and we were unable to recover it. 00:26:08.436 [2024-12-09 04:16:36.830460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.830495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 
00:26:08.437 [2024-12-09 04:16:36.830723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.830769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.830953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.830987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.831168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.831220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.831400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.831447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.831595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.831660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 
00:26:08.437 [2024-12-09 04:16:36.831812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.831846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.831997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.832042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.832235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.832293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.832511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.832558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.832734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.437 [2024-12-09 04:16:36.832779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe5b4000b90 with addr=10.0.0.2, port=4420 00:26:08.437 qpair failed and we were unable to recover it. 00:26:08.437 [2024-12-09 04:16:36.832957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, err